
Building Secure Immutable Infrastructure

Introduction

Building a secure AWS environment has many layers: AWS account access and resource privileges, keeping an inventory of instances, and managing application configuration. This is, of course, not a one-time effort but a continuous process – the ability to review AWS resources and access, to check for installed software and unpatched instances, and to check who had access to configuration properties.
We have released a separate whitepaper that addresses all of these topics.
Taking an Infrastructure as Code (IaC) approach, where the whole infrastructure (AWS resources and access) is treated as code under version control, provides full visibility and makes every change traceable and auditable.
To address visibility at the OS level, we take an immutable infrastructure approach: working with images (AMIs), automating the baking process, and running regular security scans on these images in a continuous delivery pipeline – the AMI Factory.
For handling configuration, multiple solutions are considered: AWS-managed options such as SSM Parameter Store and AWS Secrets Manager, as well as HashiCorp Vault.
Below are some highlights:

Infrastructure as code

Access to AWS infrastructure happens through API calls. By using these APIs, one can build reliable, predictable, and fully automated service management at scale. This approach is known as Infrastructure as Code (IaC) – programmable infrastructure that helps you create, manage, and configure resources. Developing infrastructure then follows standard development practices such as version control, testing, and deployment.
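As a minimal sketch of the API-driven approach, the example below uses the AWS SDK for Python (boto3) to deploy a CloudFormation template kept under version control. The template file name, stack name, and capabilities are assumptions made for illustration, not part of our pipeline code.

    import boto3

    cloudformation = boto3.client("cloudformation")

    # The template lives in version control next to the application code
    # (the file name here is illustrative).
    with open("network.template.yaml") as template_file:
        template_body = template_file.read()

    # Creating the stack through the API makes the change scriptable,
    # repeatable, and traceable back to a specific commit.
    cloudformation.create_stack(
        StackName="example-network",
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )

Running the same script against different AWS accounts or regions reproduces the same resources, which is what makes the benefits below possible.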
Using an Infrastructure as Code approach, one can create a fully auditable, repeatable and consistent AWS infrastructure. Some of the benefits are:

    • It’s auditable – anyone can review the code, peer reviews can be carried out, and every change is trackable because the code lives in the version control system
    • It’s repeatable – the same code can be executed multiple times across different environments and AWS accounts
    • It’s consistent – executing the code guarantees the same result every time; if someone makes a manual change to an AWS resource, it will be overwritten on the next run
    • Through repeatability, it is easy to isolate different environments. In a standard setup there would typically be Development, UAT, and Production environments, and they can all be kept identical by reusing the same code.

Immutable infrastructure

In the mutable infrastructure approach, servers are configured, updated, and modified in place. Administrators can either log in to these servers and implement changes or use automation frameworks like Chef, Ansible, Puppet, or SaltStack to manage them. Configuration files can be changed, packages can be upgraded or downgraded, users can be added or removed, and software can be deployed directly to the servers. These servers are mutable: they can be changed after they were created.
Immutable infrastructure takes a different approach – servers, once deployed, cannot be modified. All the needed software is built into the server image; the configuration is applied at server boot time, which is what allows the same image to be deployed into multiple environments. The process of building an image is called baking. When a new version has to be deployed, or any other change is needed, a new image is baked and deployed. Once it is verified, the old instances can be decommissioned.
Leveraging the AWS APIs, the baking process is easy to automate as a continuous pipeline.
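To make boot-time configuration concrete, here is a minimal sketch using boto3: the same baked AMI is launched into different environments, and only the user data passed at boot varies. The AMI ID, instance type, and configuration script path are all illustrative assumptions.

    import boto3

    ec2 = boto3.client("ec2")

    # The same immutable image is reused everywhere; only the boot-time
    # configuration differs per environment (values are illustrative).
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # the baked AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        UserData="#!/bin/bash\n/opt/app/configure.sh --env uat\n",
    )

Promoting from UAT to Production means baking nothing new – only the user data changes.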

AMI Factory for immutable infrastructure

Building AMIs is a repeatable process, and in order to be auditable it has to be fully automated, with no manual intervention.
Here is the solution we designed and built at cloudxshift. The pipeline is based on AWS CodePipeline and AWS CodeBuild, and the code is stored in AWS CodeCommit. These AWS-managed services integrate well with AWS IAM, AWS Lambda, and Amazon CloudWatch to achieve a fine-grained permissions model, automation, and traceability.
The AMI Factory process is orchestrated by AWS CodePipeline.

    1. CodePipeline starts an AWS CodeBuild job, which pulls the code from the repository and runs HashiCorp Packer
    2. The Packer process:
      1. Starts a new EC2 instance
      2. Connects to that EC2 instance and executes predefined scripts:
        1. OS update
        2. Apply OS configuration and tuning
        3. Apply CIS hardening to the OS
        4. Install the application
        5. Install antivirus, IDS, IPS, and file integrity check software
        6. Install the CloudWatch agent
        7. Install the Inspector agent
      3. Registers the new AMI
    3. AWS CodePipeline executes an AWS Lambda function to start an EC2 instance from the newly created AMI. A tag is applied to the instance so that in the next step Amazon Inspector scans only this instance (see the sketch after this list)
    4. An Inspector CVE vulnerability scan is started on the EC2 instance
    5. Inspector sends a notification to SNS when it finishes
    6. An AWS Lambda function is triggered to analyze the Inspector findings (also sketched below)
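The two Lambda functions can be quite small. The sketch below is a minimal illustration, not the production code from the whitepaper: the event shapes, tag key, instance type, and severity filter are all assumptions.

    import json
    import boto3

    ec2 = boto3.client("ec2")
    inspector = boto3.client("inspector")

    # Step 3: launch an instance from the freshly baked AMI and tag it,
    # so the Inspector assessment target matches only this instance.
    def launch_for_scan(event, context):
        response = ec2.run_instances(
            ImageId=event["ami_id"],   # passed in by the pipeline (assumed shape)
            InstanceType="t3.micro",
            MinCount=1,
            MaxCount=1,
            TagSpecifications=[{
                "ResourceType": "instance",
                "Tags": [{"Key": "ami-factory-scan", "Value": "true"}],
            }],
        )
        return {"instance_id": response["Instances"][0]["InstanceId"]}

    # Step 6: triggered by the SNS notification; count high-severity
    # findings for the finished assessment run and fail if any exist.
    def analyze_findings(event, context):
        message = json.loads(event["Records"][0]["Sns"]["Message"])
        run_arn = message["run"]   # assessment run ARN (assumed message key)
        high = inspector.list_findings(
            assessmentRunArns=[run_arn],
            filter={"severities": ["High"]},
        )["findingArns"]
        if high:
            raise RuntimeError(f"{len(high)} high-severity findings in {run_arn}")

A failing analysis function fails the pipeline stage, so an AMI with high-severity CVEs never reaches the deployment step.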

Learn more about the full AMI Factory pipeline, scheduled security scans, configuration management tools, and automated incident response in the whitepaper.

Conclusion

Working with immutable infrastructure and pre-built images guarantees that instances deployed in different accounts are identical bit by bit. The operating system, its specific configuration, and the installed software with all its dependencies are the same; the only thing that differs is the service configuration. This makes every deployment easily reproducible. Automating the AMI build and keeping the code in a repository makes the whole infrastructructure auditable and its changes traceable.
Building the AMI Factory as a pipeline makes the process auditable and replicable, and achieves security scanning and continuous delivery in one flow. For more solutions like this, visit www.cloudxshift.com.


The author is a Cloud Strategist with expertise in the design and delivery of cost-effective, high-performance information technology infrastructures and application solutions that address complex business problems.