AWS Systems Manager, a Recipe for Simplicity

06 Feb 2018

It was around 2006, when EC2 was still new, that I first worked with Amazon Web Services EC2, and to be honest, those days were much simpler. If you needed an instance, you started it from some AMI. If you needed additional software installed, you did it manually. If you needed this software as a standard part of your instance, you just created a new AMI from the instance you had already prepared. If you needed a persistent volume, you created it and attached it to the appropriate instance manually, even after the instance had fully started. To monitor your instances, you would run another instance and install monitoring software of your choice. I personally enjoyed Zabbix.

Installing OS updates, like security patches, required you to run an instance, apply the updates on top of it, create an AMI from that instance, and then relaunch all your instances from the new AMI. Lastly, if you needed to automate any of these tasks, you just used a command line client.
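
If you are curious what that bake-and-relaunch cycle looks like in code, here is a minimal sketch using today's boto3 (a convenience those early days lacked); the instance ID, image name, and fleet size are placeholders:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Bake a new AMI from an instance that already has the patches applied.
    # The instance ID below is a placeholder.
    image = ec2.create_image(
        InstanceId="i-0123456789abcdef0",
        Name="web-base-with-patches",
        Description="Base image with the latest security patches",
    )

    # Wait until the image is available, then relaunch the fleet from it.
    ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

    ec2.run_instances(
        ImageId=image["ImageId"],
        InstanceType="t2.micro",
        MinCount=10,
        MaxCount=10,
    )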

We had 10 instances running permanently and would manually start another 10 to handle increased load. We always knew in advance when the extra capacity would be needed, because the load spike came in response to our own email blast.

Nowadays, I’m still dealing with AWS EC2. But things have changed a bit.

First off, it’s rare to see a project hosted on AWS EC2 with 50 instances, but common to see a project with 500+ instances. Software releases now come daily, if not hourly. Automation is the trend, and it makes or breaks projects. Most importantly, I have become a senior engineer, and I deserve some simplicity.

Rundown of AWS Systems Manager

Last year, AWS released AWS Systems Manager, formerly known as Amazon EC2 Systems Manager. But what exactly is it? According to the documentation, it’s a unified interface that allows you to centralize operational data and automate tasks across your AWS resources. In terms of actions, it should allow us to:

  • automate common maintenance and deployment tasks;
  • run Linux shell scripts and Windows PowerShell commands;
  • automate the process of patching managed instances;
  • set up recurring schedules for managed instances;
  • keep managed instances in a defined state.

Overall, this is great for simplicity.
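
To give a taste of the shell-script part, here is a minimal sketch using boto3 and the built-in AWS-RunShellScript document; the instance ID is a placeholder:

    import boto3

    ssm = boto3.client("ssm", region_name="us-east-1")

    # Run shell commands on a managed instance via the built-in
    # AWS-RunShellScript document. The instance ID is a placeholder.
    response = ssm.send_command(
        InstanceIds=["i-0123456789abcdef0"],
        DocumentName="AWS-RunShellScript",
        Parameters={"commands": ["uptime", "df -h"]},
    )
    command_id = response["Command"]["CommandId"]

    # Wait for the command to finish, then fetch its output.
    ssm.get_waiter("command_executed").wait(
        CommandId=command_id, InstanceId="i-0123456789abcdef0"
    )
    result = ssm.get_command_invocation(
        CommandId=command_id, InstanceId="i-0123456789abcdef0"
    )
    print(result["Status"], result["StandardOutputContent"])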

The requirements, especially for Patch Manager, are fairly strict. Your instances must be in a supported region; this is not a big limitation, as about 16 of 18 regions are supported at the moment. Your instances must also be configured to use the Systems Manager instance profile.

First, you’ll need to create a Systems Manager instance profile. Systems Manager is not allowed to perform actions on your instances by default; you grant that access by using an IAM instance profile, a pretty straightforward and well documented procedure.
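
For illustration, here is a sketch of that procedure in boto3, assuming the AmazonEC2RoleforSSM managed policy the documentation points to; the role and profile names are arbitrary:

    import json
    import boto3

    iam = boto3.client("iam")

    # Trust policy letting EC2 instances assume the role.
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }

    iam.create_role(
        RoleName="SSMInstanceRole",
        AssumeRolePolicyDocument=json.dumps(trust_policy),
    )

    # Attach the AWS managed policy for Systems Manager.
    iam.attach_role_policy(
        RoleName="SSMInstanceRole",
        PolicyArn="arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM",
    )

    # Wrap the role in an instance profile, which you then attach
    # to instances at launch.
    iam.create_instance_profile(InstanceProfileName="SSMInstanceProfile")
    iam.add_role_to_instance_profile(
        InstanceProfileName="SSMInstanceProfile",
        RoleName="SSMInstanceRole",
    )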

An AWS Systems Manager Agent must also be installed on your instance. The easiest way is to launch the instance from an AMI that already includes the agent. This is easy for Windows, as all the AMIs published in November 2016 or later come with the agent preinstalled. For Linux, it’s not that easy, as only Amazon Linux and recent Ubuntu Server LTS base AMIs are ready. For other Linux distributions, you will have to create your own AMI the old-fashioned way.
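
For those other distributions, one option is to install the agent at launch through user data. Below is a sketch with a placeholder AMI ID; the RPM download URL follows the pattern from the documentation, so verify it for your region:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Cloud-init script that installs the SSM Agent on an RPM-based distro.
    user_data = """#!/bin/bash
    yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
    systemctl enable amazon-ssm-agent
    systemctl start amazon-ssm-agent
    """

    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
        IamInstanceProfile={"Name": "SSMInstanceProfile"},
        UserData=user_data,
    )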

Also, agents installed on machines outside of EC2, such as on-premises servers, require a managed-instance activation. But I’m looking for simplicity, so I use the AMIs that are ready for AWS Systems Manager.
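
If you do need the activation route, it looks roughly like this; SSMServiceRole is a placeholder for a service role you would create for the hybrid setup:

    import boto3

    ssm = boto3.client("ssm", region_name="us-east-1")

    # Create a managed-instance activation for machines outside EC2.
    activation = ssm.create_activation(
        DefaultInstanceName="on-prem-web",
        IamRole="SSMServiceRole",
        RegistrationLimit=10,
    )

    # The agent on each machine is then registered with these credentials,
    # e.g.: amazon-ssm-agent -register -code <code> -id <id> -region us-east-1
    print(activation["ActivationId"], activation["ActivationCode"])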

Your IAM user account, group, or role also has to be assigned the proper permissions; users are not allowed to call Systems Manager by default. This time, you grant access with IAM user policies rather than an instance profile.
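
As a sketch, here is how you might grant an existing "ops" group full access via the AmazonSSMFullAccess managed policy; for finer-grained control you would write a custom policy instead:

    import boto3

    iam = boto3.client("iam")

    # Grant a group of operators full Systems Manager access.
    iam.attach_group_policy(
        GroupName="ops",
        PolicyArn="arn:aws:iam::aws:policy/AmazonSSMFullAccess",
    )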

Finally, your instance will need outbound Internet access; inbound Internet access is not required. The management unit of AWS Systems Manager is an AWS Resource Group. A resource group is a collection of AWS resources that are all in the same AWS region and match the criteria provided in a query. That means you will not be able to use your fleet as a single unit if it is spread across regions.
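
Creating such a group from a tag query might look like this with boto3; the group name and the tag are placeholders:

    import json
    import boto3

    rg = boto3.client("resource-groups", region_name="us-east-1")

    # Group every EC2 instance in the region tagged Role=web.
    query = {
        "ResourceTypeFilters": ["AWS::EC2::Instance"],
        "TagFilters": [{"Key": "Role", "Values": ["web"]}],
    }

    rg.create_group(
        Name="web-fleet",
        ResourceQuery={
            "Type": "TAG_FILTERS_1_0",
            "Query": json.dumps(query),
        },
    )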

That’s just about everything you need to know about AWS Systems Manager.

Making the Switch to AWS Systems Manager

AWS Systems Manager is quite a new tool, so you probably already have management and monitoring systems based on legacy tools like Ansible and Prometheus. And of course, AWS Systems Manager is not a silver bullet, so switching will require significant effort.

I believe some of the advantages of switching include:

  • Fine-grained security. You can configure different levels of access for different users and groups. This is not that easy with Ansible or Prometheus.
  • Your instances are ready for AWS Systems Manager straight from a standard AMI.
  • AWS Systems Manager Web GUI is actually pretty good.
  • Your fleet inventory is accessible in the same place as your configuration tools.
  • An AWS Resource Group is formed by a query, so you can limit the number and kind of instances you are currently dealing with (see the sketch after this list). Of course this is possible with legacy tools, but not natively. I once upgraded PHP 5.6 to 7 on 20+ instances with Ansible just because I made a typo in a group name.
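
To make that last point concrete, here is a sketch of scoping a command with a tag query instead of an explicit host list; the tag and the command are placeholders:

    import boto3

    ssm = boto3.client("ssm", region_name="us-east-1")

    # Target instances by tag query instead of an explicit host list,
    # so a typo yields an empty target set rather than the wrong fleet.
    ssm.send_command(
        Targets=[{"Key": "tag:Role", "Values": ["web"]}],
        DocumentName="AWS-RunShellScript",
        Parameters={"commands": ["php -v"]},
        MaxConcurrency="10%",
        MaxErrors="1",
    )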

Ultimately, making the switch is up to you, and it really depends on how satisfied you are with your current system. But if you’re building infrastructure for a new project, build it compatible with AWS Systems Manager. It’s really not expensive and could be surprisingly useful later.