If you’re in the cloud industry, you’ve probably heard of the concept of Infrastructure as Code (IaC). If not, it’s a way of defining your infrastructure as code or in configuration files. This enables version control, reviews and traceability that you wouldn’t get from making manual changes in your system, and manual interaction should be avoided. It is best practice for multiple reasons, including disaster recovery, efficiency and cost.
There are several tools and approaches to choose from when it comes to Infrastructure as Code on different cloud platforms. AWS CloudFormation, AWS CDK and Terraform are all examples of tools utilized for IaC on AWS.
As a daily user of Terraform, with engineering and operational experience from several large AWS environments, I’ve picked up some tools and tricks that I’ve found useful in my daily work. In this post I will share some of my insights.
1. Standardize and automate from the start
To stay consistent, enforce best practices and speed things up, you should always strive to standardize. Standardize and automate your workflow all the way from your local environment through your CI/CD pipelines into the cloud.
It’s not a trivial task right off the bat, and you don’t have to overdo it, since requirements change as the project develops and scales. But with some basic steps and tools you can start the process and set up a project in a way that makes it easy to enforce practices and add standards as it evolves.
Centralized version control is key to enabling collaboration and to setting up pipelines where you can run jobs for continuous feedback on your configuration files. Once you have version control and a pipeline to run jobs in, you can easily add steps to test and validate new things as you go. Something you should decide early on is the process and standard for documenting your resources. One tool that can help you generate documentation for your Terraform modules is terraform-docs – worth checking out!
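As a sketch of how that can look in practice: terraform-docs can read a small config file committed alongside each module, so everyone generates documentation the same way. A minimal `.terraform-docs.yml` might look like this (the `inject` mode assumes your README contains the tool’s begin/end markers):

```yaml
# .terraform-docs.yml – render a Markdown table of the module's
# inputs and outputs, and inject it into the README between markers.
formatter: "markdown table"

output:
  file: README.md
  mode: inject
```

Running `terraform-docs .` in the module directory then keeps the README’s inputs/outputs section up to date, which is easy to enforce as a CI step.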
The community built around Terraform and the other HashiCorp products is large and active, and there are many tools and plugins out there that can help you set up your workflow both locally and in your CI/CD systems. I use several of them daily and want to share some that I really recommend:
tfenv is a version manager for Terraform that is useful if you manage a lot of different projects using different versions of Terraform. Simple commands let you install, list and switch Terraform versions in your workspaces.
Examples of use:
❯ tfenv list – lists installed versions of Terraform.
❯ tfenv install – installs a version of Terraform.
❯ tfenv use – switches the default version.
You can install tfenv with Homebrew using brew install tfenv, or follow the installation instructions here.
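tfenv also picks up a `.terraform-version` file in the project directory, which is a handy way to pin the Terraform version per repository. The file contains nothing but the version string, for example (the version number here is just an illustration):

```
1.5.7
```

With that file committed, running `tfenv install` with no arguments installs the pinned version, and tfenv resolves `terraform` invocations in that directory to it.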
TFLint is, as the name suggests, a linter for Terraform. A linter is a tool that analyses the code for potential errors. TFLint warns about deprecated syntax and unused declarations, and enforces best practices and naming conventions. You can decide which rules to apply to the analysis.
Install it using Homebrew on Mac with brew install tflint, or using Chocolatey on Windows with choco install tflint. Read more about TFLint here.
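To make the rule selection explicit and shared across the team, TFLint reads a `.tflint.hcl` file in the project. A minimal sketch, enabling the AWS ruleset plus one built-in rule (the plugin version below is an assumption – check the ruleset’s releases for a current one):

```hcl
# .tflint.hcl – shared linter configuration for the repository.

# Enable the AWS ruleset plugin (downloaded with `tflint --init`).
plugin "aws" {
  enabled = true
  version = "0.30.0"
  source  = "github.com/terraform-linters/tflint-ruleset-aws"
}

# Example of turning on an individual built-in rule.
rule "terraform_naming_convention" {
  enabled = true
}
```

Running `tflint --init` once downloads the plugin, after which `tflint` runs the configured checks locally or in CI.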
Atlantis. When working with larger teams of engineers, there are a couple of concerns around your infrastructure deployment process that Atlantis can help you secure and speed up.
Atlantis is a tool for automating pull requests for your Terraform configurations. Among the advantages: you don’t have to grant every engineer permission to run a terraform plan or apply in all environments, you get a centralized way of managing your projects’ Terraform versions, and you get a GitOps workflow with control over what is deployed when, making sure it’s reviewed before it is applied.
To use Atlantis, you deploy it into your infrastructure and then use webhooks to let it respond to your pull requests automatically. The official page and instructions for the tool are really good, and the steps to set it up are found here.
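Once the server is running, a repo-level `atlantis.yaml` controls how each project is planned. A sketch (the project name, directory and version pin are invented for illustration):

```yaml
# atlantis.yaml – plan the staging project automatically whenever
# its Terraform files change in a pull request.
version: 3
projects:
  - name: staging
    dir: environments/staging
    terraform_version: v1.5.7
    autoplan:
      when_modified: ["*.tf", "*.tfvars"]
      enabled: true
```

This is also where the centralized version management mentioned above comes in: the Terraform version is pinned per project in one reviewed file instead of on each engineer’s machine.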
Checkov is a policy-as-code and static analysis tool that scans your configuration files to find misconfigurations before they’re deployed. It’s best used in your CI/CD pipelines for continuous feedback throughout your deployment process. It works with various IaC languages and cloud platforms, comes with over 1,000 predefined policies covering security and compliance best practices for AWS, Azure and GCP resources, and lets you define your own. For a list of all predefined AWS-related policies, follow this link.
If working in a team, this is a great way to standardize the best practices you want to apply to your infrastructure.
Checkov prints a clear pass/fail report for each policy it evaluates; their official get-started guide shows example output.
You can also include and exclude individual checks inside your Terraform configuration files.
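For example, a single policy can be suppressed on a specific resource with an inline skip comment, with the reason showing up in the report. The resource and bucket name here are just for illustration; CKV_AWS_18 is Checkov’s S3 access-logging check:

```hcl
resource "aws_s3_bucket" "logs" {
  # Suppress one Checkov policy on this resource only:
  #checkov:skip=CKV_AWS_18:Access logging is handled account-wide
  bucket = "example-log-bucket"
}
```

Checkov’s CLI also takes `--check` and `--skip-check` flags to include or exclude policy IDs for a whole run.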
Read more about Checkov here.
2. Make use of the native Terraform resources
Something really worth mentioning is that the Terraform CLI, the official Terraform docs and the Terraform Registry are extremely useful, and getting comfortable with them should be your first focus before installing any third-party tools.
The Terraform CLI has some neat commands for formatting and validating your code out of the box. If you write Terraform frequently you are probably already familiar with them, but here are the basic commands I recommend if you are a new user:
● terraform fmt – formats your terraform configuration files.
● terraform validate – validates your terraform code.
● terraform state list – lists all the resources in the workspace.
● terraform import – imports existing resources into your Terraform state.
Read more about all the available CLI commands here.
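On the import command: since Terraform 1.5 you can also describe imports declaratively in configuration instead of running the CLI command per resource, which fits the IaC workflow nicely. A sketch (the bucket name is hypothetical):

```hcl
# Bring an existing S3 bucket under Terraform management on the
# next plan/apply (requires Terraform 1.5 or later).
import {
  to = aws_s3_bucket.logs
  id = "example-log-bucket"
}

resource "aws_s3_bucket" "logs" {
  bucket = "example-log-bucket"
}
```

The advantage over the one-off CLI command is that the import itself goes through review in your pull request like any other change.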
Whenever I need to declare a resource that I don’t know by heart – if I need to know what attributes are available, how to import a resource or which data sources exist for a service – I head to the Terraform docs for the AWS provider and search for the resource in question. I wanted to include it in this post because I really want to emphasize that the official documentation is very useful.
The Terraform Registry is a place you’ll want to visit frequently. I’ll explain why in the next section about reusable infrastructure.
3. Create reusable infrastructure
Don’t repeat yourself – DRY. This is a common practice that I learned to live by while working as a developer. The idea is to write code or configuration where nothing has to be changed in more than one place: values are represented by variables, and resources are grouped into reusable units – much like classes or packages – that multiple users or projects can share.
Terraform modules come in handy here, and you should know how to create them. Writing good modules will make your infrastructure reusable and save you a lot of time. Modules isolate resources and abstract the configuration away from the user. I’ve used this a lot in my role as a “DevOps” or “Cloud Operations” person, writing modules for developers so that they retain control over their infrastructure without having to deep-dive into all the technical details.
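A minimal sketch of what such a module might look like (file layout and names are invented for illustration): the module exposes a small variable surface and hides the resource details from the caller.

```hcl
# modules/log-bucket/variables.tf – the module's public interface.
variable "name" {
  description = "Name of the bucket to create"
  type        = string
}

# modules/log-bucket/main.tf – implementation hidden from the caller.
resource "aws_s3_bucket" "this" {
  bucket = var.name
}

# Root configuration: developers only need to know the inputs.
module "app_logs" {
  source = "./modules/log-bucket"
  name   = "example-app-logs"
}
```

In a real module the implementation would typically grow (encryption, lifecycle rules, tags), while callers keep the same small interface.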
But remember! Don’t spend too much time writing a module that is bulletproof and will survive every new use case and scaling scenario. I’ve sometimes found myself thinking a bit too far ahead and building things that never get used or already exist. As mentioned in the previous section, get familiar with the Terraform Registry: there are tons of ready-made, official modules for numerous providers. Don’t reinvent the wheel – browse the modules, get an overview of what exists, and if there is a module that fits your case, use it! And don’t be afraid to contribute to the community modules and providers that exist.
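Consuming a registry module is just a matter of pointing `source` at it and pinning a version. For example, the widely used community VPC module (the version constraint and inputs below are illustrative – check the registry page for current releases and options):

```hcl
# Use the community VPC module from the Terraform Registry
# instead of hand-writing VPC, subnet and routing resources.
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "example"
  cidr = "10.0.0.0/16"
}
```

Pinning the version keeps upgrades deliberate: you bump the constraint in a pull request and review the resulting plan like any other change.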
I hope you enjoyed this post and that it gave you some insight into how you can work with Terraform and IaC. Please comment and let me know if you have any tools or practices you think are worth mentioning!