Tuesday, March 29, 2022

Pattern for replaceable pet servers on AWS

So, here is a problem:

You have a server that you want to be able to replace.  Usually this would be done with some kind of configuration management, a pre-baked image, or cloud-init startup scripts that set it back up.  However, it's a pet: it's way too much work to set up full configuration management, and its config files get updated by people logging in to it anyway.

Now, sure, you could just create an AMI of it and restore the AMI if something goes wrong, but this has some downsides.  The AMI gets out of date unless it is created frequently.  You also have to patch in place, and it's a fairly complex job to move from one major OS release to another, or to switch distros.
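For reference, that manual snapshot approach looks something like the following sketch; the instance ID and image name are placeholders, not values from any real setup.

```shell
# Snapshot the pet as an AMI on a schedule -- the approach the rest of
# this post tries to avoid. Instance ID and name are placeholders.
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "pet-backup-$(date +%Y%m%d)" \
  --no-reboot
```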

Here is a pattern I use that makes this much less painful where I have a single pet.  It is similar in spirit to etckeeper, but even more basic.

  1. I set up a user-data (cloud-init) script to do the basics: setting the hostname, installing packages, configuring bind mounts, starting services, installing cron jobs, and other random bits.
  2. I create an EFS share and an IAM role, and mount the share on the server as /config.
  3. I bind-mount the bits of the OS config/data I actually care about into the filesystem, using a cloud-init shell script:
mkdir -p /config
# ${efs-fs-id} is substituted by the templating layer before the script runs
echo "${efs-fs-id}:/ /config efs defaults,_netdev 0 0" >> /etc/fstab
mount /config
for i in /etc/httpd/conf/httpd.conf /etc/named.conf /var/spool/cron /usr/local /root; do
    echo "/config$i $i none defaults,bind 0 0" >> /etc/fstab
    mount "$i"
done
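One detail the script above relies on: each bind source under /config must already exist on the EFS share. A one-time seeding step (an assumption on my part, not spelled out in the post) copies the pet's current files into the EFS tree before the first boot with bind mounts. A sketch, demonstrated against a scratch directory rather than the real /config:

```shell
#!/bin/sh
# Copy one path (file or directory) into the EFS tree, preserving
# ownership, permissions and timestamps.
seed_path() {
    src=$1; dest_root=$2
    mkdir -p "$dest_root$(dirname "$src")"
    cp -a "$src" "$dest_root$src"
}

# On the real pet this would be, for each bind-mounted path:
#   seed_path /etc/named.conf /config
# Demonstrated here against a scratch directory instead:
scratch=$(mktemp -d)
printf 'options {};\n' > "$scratch/named.conf"
seed_path "$scratch/named.conf" "$scratch/efs"
ls "$scratch/efs$scratch"
```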

This gives me a server that I can taint and replace with Terraform; I can swap the AMI with minimal effort, and the EFS data is of course backed up with AWS Backup.  It means a normal server admin can easily see which files will survive a rebuild, and can edit config files without having to fight configuration management.
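Replacing the pet then reduces to a one-liner; the resource address aws_instance.pet below is hypothetical, not from the post.

```shell
# Rebuild the instance: cloud-init re-runs on the fresh box and the bind
# mounts bring the preserved /config data straight back.
terraform apply -replace=aws_instance.pet

# Older Terraform versions use taint instead:
#   terraform taint aws_instance.pet && terraform apply
```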

OS configuration management saves time if you have three or more servers that need to be configured the same or similarly, but for a single host it sometimes takes longer than just configuring the machine by hand.
