Wilson Mar



Immutable, declarative, versioned Infrastructure as Code (IaC) and Policy as Code provisioning AWS, Azure, GCP, and other clouds, using Atlantis for team versioning and GitOps



terraform.io (HashiCorp’s marketing home page) says the product is a “tool for building, changing, and versioning infrastructure safely and efficiently”.

“Terraform makes infrastructure provisioning: Repeatable. Versioned. Documented. Automated. Testable. Shareable.”

This tutorial is a step-by-step, hands-on, deep yet succinct introduction to using HashiCorp’s Terraform to build, change, and version resources running in clouds.

Atlantis on Terraform

This workflow enhances the traditional core Terraform workflow* with GitHub’s Pull Request and webhooks mechanism to ensure code reviews.

Atlantis was created in 2017 by Anubhav Mishra and Luke Kysow, who saw Hootsuite use it before they joined HashiCorp in 2018. github.com/runatlantis/atlantis is a self-hosted Golang application that listens for Terraform pull request events via webhooks. It can run as a Golang binary or Docker image deployed on VMs, Kubernetes, Fargate, etc.

Read the description and benefits at runatlantis.io:


Developers and Operations people type atlantis plan and atlantis apply in the GitHub GUI to trigger Atlantis into invoking terraform plan and terraform apply in the CLI.


Atlantis-based workflow with Terraform Enterprise


  1. In your GitHub account Developer settings, generate a Personal Access Token (named “Terraform Atlantis”) and check only repo scope (to run webhooks).

    CAUTION: This is a static secret which should be rotated occasionally.

    Click the clipboard icon. On your MacOS Terminal, within a project folder, install Atlantis bootstrap locally and provide the GitHub PAT.

    Atlantis creates a starter GitHub repo, then downloads the ngrok utility and forks an “atlantis-example” repo under your account. It sets up a server at a ngrok.io URL.

  2. Copy in base Terraform configuration files.

    Within files are references to reusable modules used by other projects.

    An atlantis.yaml file specifies projects to be automatically planned when a module is modified.

  3. Manually run terraform init to install cloud provider plug-ins.
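    As a sketch (Terraform 0.13+ syntax; the provider version constraint shown is illustrative), the plug-ins that init installs can be pinned in a required_providers block:

    ```hcl
    # Pin provider plug-ins so "terraform init" installs known versions.
    terraform {
      required_providers {
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.0"   # illustrative version constraint
        }
      }
    }
    ```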

  4. In main.tf add a null resource as a test: from perhaps https://github.com/jnichols3/terraform-envs

    resource "null_resource" "demo" {}
  5. Anyone can open up a pull request in the GitHub repo holding your Terraform configuration files.

    This ensures that other team members are aware of pending changes. When plan is run, the directory and Terraform workspace are locked until the pull request is merged or closed, or the plan is manually deleted. With locking, you can ensure that no other changes will be made until the pull request is merged. https://www.runatlantis.io/docs/locking.html#why

  6. Instead of you manually invoking terraform plan, Atlantis invokes it when atlantis plan is typed in the GitHub GUI, which triggers the Atlantis server to run. Atlantis can be invoked automatically on any new pull request or new commit to an existing pull request, and it adds comments on the pull request in addition to creating an execution plan with dependencies.

atlantis plan can be run for a specific directory or workspace.


  1. For those licensed to use Terraform Cloud as a remote backend provisioner, sentinel apply is also invoked to create cost projections and policy alerts based on Sentinel policy definitions.

  2. Someone else on your team reviews the pull request, makes edits, and reruns atlantis plan several times before clicking to approve the PR.

  3. In a GitHub GUI comment, type atlantis apply to trigger Atlantis to run terraform apply and add comments about its provisioning of resources. Atlantis makes output from apply visible in GitHub.

    Atlantis can be configured to automatically merge a pull request after all plans have been successfully applied.*


    Note that apply creates tfstate files.

  4. Optionally, a “local-exec” provisioner can invoke Ansible to configure programs inside each server.
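    A minimal sketch of such a provisioner, assuming a hypothetical playbook.yml and an aws_instance resource named web (the AMI ID is a placeholder):

    ```hcl
    resource "aws_instance" "web" {
      ami           = "ami-12345678"   # placeholder AMI ID
      instance_type = "t2.micro"

      # After the instance is created, run Ansible from the machine
      # running Terraform, against the new instance's public IP.
      provisioner "local-exec" {
        command = "ansible-playbook -i '${self.public_ip},' playbook.yml"
      }
    }
    ```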

Repeatable from versioning

Terraform provides a single consistent set of commands and workflow on all clouds. That is “future proofing” infrastructure work.

Use of version-controlled configuration files in an elastic cloud means that the infrastructure Terraform creates can be treated as disposable. This is a powerful concept. Parallel production-like environments can now be created easily (without ordering hardware) temporarily for experimentation, testing, and redundancy for High Availability.


Terraform is better characterized as a multi-service tool rather than a “multi-cloud tool”. PROTIP: One would need to rewrite templates to move from, say, AWS to Azure; Terraform doesn’t abstract away the resource differences. However, using a single tool and workflow does ease migration among clouds, reducing cloud vendor lock-in.

Terraform provides an alternative to each cloud vendor’s IaC solution:

  • AWS - Cloud Formation & CDK
  • Microsoft Azure Resource Manager Templates
  • Google Cloud Platform Deployment Manager
  • OpenStack Heat (on-premises)

Terraform can also provision on-premises servers running OpenStack, VMWare vSphere, and CloudStack, as well as AWS, Azure, Google Cloud, DigitalOcean, Fastly, and other cloud providers (each provider is responsible for understanding API interactions and exposing resources).
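A sketch of how one configuration can address two clouds through their provider blocks (the region and settings shown are illustrative):

```hcl
provider "aws" {
  region = "us-west-2"
}

provider "azurerm" {
  features {}   # the azurerm provider requires this (possibly empty) block
}
```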

Infrastructure as Code (IaC)

The objective is to accelerate work AND save money by automating the configuration of servers and other resources more quickly and consistently than manually clicking through the GUI. That’s called the “Infrastructure-Application Pattern (I-A)”.

BLOG: Analysis:

Tool       Year   Maturity  Community  Type          Infra.     Lang.        Agent  Master
CFN/CF     2011   Medium    Small*1    Provisioning  Immutable  Declarative  No     No
Heat       2012   Low       Small      Provisioning  Immutable  Declarative  No     No
Terraform  2014   Low       Huge       Provisioning  Immutable  Declarative  No     No
Pulumi     >2017  Low       New        Provisioning  Mutable    Procedural   Yes    Yes

Terraform installs infrastructure in cloud and VM as workflows.

Kubernetes orchestrates (brings up and down) Docker containers.

Terraform vs. AWS Cloud Formation

Feature                           CloudFormation     Terraform
Multi-cloud provider support      AWS only           AWS, GCE, Azure (20+)
Source code                       closed-source      open source
Open-source contributions?        No                 Yes (GitHub issues)
State management                  by AWS             within Terraform
GUI                               Free Console       licen$ed*
Configuration format              JSON               HCL, JSON
Execution control*                No                 Yes
Iterations                        No                 Yes
Manage already created resources  No                 Yes (hard)
Failure handling                  Optional rollback  Fix & retry
Logical comparisons               No                 Limited
Extensible modules                No                 Yes

To get AWS certified, you’re going to need to know CloudFormation.

Going from CFN yaml to HCL?

The options:

  1. Ruby-based https://github.com/dtan4/terraforming exports existing AWS resources to Terraform style tf, tfstate. It also comes as a Docker container.

  2. Install on your MacOS laptop this utility from Google to create HCL from existing running cloud resources. This enables you to transition from what was created in the AWS GUI or CFN to HCL you can modify:

    brew info terraformer
    brew install terraformer
  3. Deploy your existing CFT instead of trying to convert it:


  4. Converting CFT intrinsic functions may be possible for simple cases but is very complex (almost impossible) in general:


*1 - CF/CFN (CloudFormation) is used only within the AWS cloud while others operate on several clouds. CFN is the only closed-sourced solution on this list. Code for Terraform is open-sourced at https://github.com/hashicorp/terraform

Those who create AMI’s also provide CFN templates to customers.* (cloudnaut.io has free templates)

TOOL: Troposphere and Sceptre make CFN easier to write, with basic loops and logic that CFN lacks.

But in Sep 2018 CloudFormation got macros to do iteration and interpolation (find-and-replace). Caveat: it requires dependencies to be set up.

CF/CFN (CloudFormation) limits the size of objects uploaded to S3.

You can’t really work around that with CFN alone, even though CloudFormation offers nested stacks (only within AWS).

AWS CloudFormation and Terraform can both be used at the same time. Terraform is often used to handle security groups, IAM resources, VPCs, Subnets, and policy documents, while CFN is used for actual infrastructural components, now that CloudFormation has released drift detection.

NOTE: “Combined with cfn-init and family, CloudFormation supports different forms of deployment patterns that is much more awkward to do in Terraform. ASGs with different replacement policies, automatic rollbacks based upon Cloudwatch alarms, and so forth are all well documented and work pretty straight forward in CloudFormation due to the state being managed purely internal to AWS. Terraform is not really an application level deployment tool and you wind up rolling your own. Working out an odd mix of null resources and shell commands to deploy an application while trying to roll back is not straightforward and seems like a lot of reinventing the wheel.”

References about CFN:

Configuration Management

Tool       Year  Maturity  Community              Type           Infra.   Lang.        Agent  Master
Puppet     2005  High      Large                  Config. Mgmt.  Mutable  Declarative  Yes    Yes
Chef       2009  High      Large                  Config. Mgmt.  Mutable  Procedural   Yes    Yes
SaltStack  2011  Medium    Large                  Config. Mgmt.  Mutable  Declarative  Yes    Yes
Ansible    2012  Medium    Huge, fastest growing  Config. Mgmt.  Mutable  Procedural   No     No

Terraform and Ansible can work in unison and complement each other. Terraform can bootstrap the underlying cloud infrastructure, then Ansible provisions the user space. To test a service on a dedicated server, skip using Terraform and run the Ansible playbook on that machine. Derek Morgan has a “Deploy to AWS with Ansible and Terraform” video class at LinuxAcademy which shows how to do just that, with code and diagram.

“Procedural” means “programmatic” as in a Python or JavaScript program applies logic. This means procedures need to be written to check whether a desired resource is available before provisioning, then logic is needed to check whether the provisioning command was effective.

“Declarative” means a (YAML-format) file defines what is desired, and the system makes it so. tf files are declarative, meaning that they define the desired end-state (outcomes). If 15 servers are declared, Terraform automatically adds or removes servers to end up with 15 servers, rather than requiring procedures that add 5 servers. Terraform can do that because Terraform knows how many servers it has set up already.
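The 15-server example could be declared like this (the AMI ID is a placeholder); changing count and re-running apply converges the real infrastructure to the new number:

```hcl
resource "aws_instance" "web" {
  count         = 15              # desired end-state, not a procedure
  ami           = "ami-12345678"  # placeholder AMI ID
  instance_type = "t2.micro"
}
```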

IaC code is idempotent (repeated runs results in what is described, and does not create additional items with every run). Terraform takes action only when needed (called “convergence” principle).

Terraform manages explicit and implicit (assumed) dependencies automatically.

Terraform automatically takes care of performing operations in the correct sequence.
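An implicit dependency arises whenever one resource references another’s attribute, and depends_on declares an explicit one. A sketch with placeholder names and values:

```hcl
resource "aws_instance" "web" {
  ami           = "ami-12345678"  # placeholder AMI ID
  instance_type = "t2.micro"

  # Explicit dependency: wait for the bucket even though no
  # attribute of it is referenced here.
  depends_on = [aws_s3_bucket.assets]
}

resource "aws_eip" "ip" {
  # Implicit dependency: referencing web's id makes Terraform
  # create the instance before this Elastic IP.
  instance = aws_instance.web.id
}

resource "aws_s3_bucket" "assets" {
  bucket = "my-assets-bucket"     # placeholder bucket name
}
```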


PROTIP: WARNING: Terraform does not support rollbacks of changes made.

“Immutable” means once instantiated, components cannot be changed. In DevOps, this strategy means individual servers are treated like “cattle” (removed from the herd) and not as “pets” (courageously kept alive as long as possible).

Immutable and idempotent means “when I make a mistake in a complicated setup, I can get going again quickly and easily with less troubleshooting because I can just re-run the script.”

Parallel execution

A key differentiator of Terraform is its plan command, which provides more than just a “dry-run” before configurations are applied for real. Terraform identifies dependencies among components requested, and creates them in the order needed.

Dependency Graph

A Resource Graph of dependencies can be created by the terraform graph command (click the image for full screen).

The above is from “Solving Infrastructure Challenges with Terraform” 5h videos on CloudAcademy by Rogan Rakai using GCP and VSCode on https://github.com/cloudacademy/managing-infrastructure-with-terraform to create a two-tier sample WordPress app with a MYSQL_5_7 database, both running under Kubernetes (GKE), with a replica in another region.

A more colorful format using Blast Radius [examples]:


There is also a “WebGraphviz” tool.

Under the covers, terraform plan generates an execution plan file, which apply uses to create infrastructure. This guarantees that what appeared in plan is the same as what happens when apply occurs.

When Terraform analyzes a configuration specification, it recognizes where parallel execution can occur, which means faster runs to create real infrastructure.

Terraform offers execution control, iterations, and (perhaps most of all) management of resources already created (desired state configuration) across several cloud providers (not just AWS).


Licensing open source for GUI

Although Terraform is “open source”, the Terraform GUI requires a license.

Paid Pro and Premium licenses of Terraform add version control integration, MFA security, HA, and other enterprise features.

Websites to know


Install to use Docker

  1. To install Docker CE on Linux:

    sudo apt-get update
    sudo apt-get install \
      apt-transport-https \
      ca-certificates \
      curl \
      software-properties-common
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    sudo add-apt-repository \
      "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
      $(lsb_release -cs) \
      stable"
    sudo apt-get update
    sudo apt-get install docker-ce

Install Terraform locally

PROTIP: Terraform is written in the Go language, so (unlike Java) there is no separate VM to download.

  1. After installation, get the version number of Terraform:

    terraform --version

    The response I got (at time of writing) is:

    Terraform v0.12.24

    WARNING: At the time of that writing, Terraform had not yet reached a 1.0.0 release, as in it was in beta maturity.

Install on MacOS using tfenv

  1. A search through brew:

     brew search terraform
    ==> Formulae
    iam-policy-json-to-terraform             terraform@0.11
    terraform                                terraform@0.12
    terraform-docs                           terraform@0.13
    terraform-inventory                      terraform_landscape
    terraform-ls                             terraformer
    terraform-provider-libvirt               terraforming
    If you meant "terraform" specifically:
    It was migrated from homebrew/cask to homebrew/core.

    The versioned formulae (terraform@0.11, etc.) are used to install a back version.

  2. Is there a brew for Terraform?

    brew info terraform

    Yes, but:

    terraform: stable 1.0.5 (bottled), HEAD
    Tool to build, change, and version infrastructure
    Conflicts with:
      tfenv (because tfenv symlinks terraform binaries)
    Not installed
    From: https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/terraform.rb
    License: MPL-2.0
    ==> Dependencies
    Build: go ✘
    ==> Options
    Install HEAD version
    ==> Analytics
    install: 41,443 (30 days), 125,757 (90 days), 480,344 (365 days)
    install-on-request: 38,839 (30 days), 118,142 (90 days), 455,572 (365 days)
    build-error: 0 (30 days)
  3. PROTIP: Although you can brew install terraform, don’t. So that you can easily switch among several installed versions of Terraform, install and use the Terraform version manager:

    brew install tfenv

    The response at time of writing:

    ==> Downloading https://github.com/tfutils/tfenv/archive/v2.2.0.tar.gz
    Already downloaded: /Users/wilson_mar/Library/Caches/Homebrew/downloads/d5f3775943c8e090ebe2af640ea8a89f99f7f0c2c47314d76073410338ae02de--tfenv-2.2.0.tar.gz
    🍺  /usr/local/Cellar/tfenv/2.2.0: 23 files, 79.8KB, built in 8 seconds

    Source for this has changed over time: from https://github.com/Zordrak/tfenv (previously from https://github.com/kamatama41/tfenv)

    When tfenv is used, do not install from the website or using:

    brew install terraform

  4. Install the latest version of terraform using tfenv:

    tfenv install latest

    The response:

    Installing Terraform v1.0.5
    Downloading release tarball from https://releases.hashicorp.com/terraform/1.0.5/terraform_1.0.5_darwin_amd64.zip
    ######################################################################### 100.0%
    Downloading SHA hash file from https://releases.hashicorp.com/terraform/1.0.5/terraform_1.0.5_SHA256SUMS
    ==> Downloading https://ghcr.io/v2/homebrew/core/pcre/manifests/8.45
    ######################################################################## 100.0%
    ==> Downloading https://ghcr.io/v2/homebrew/core/pcre/blobs/sha256:a42b79956773d
    ==> Downloading from https://pkg-containers.githubusercontent.com/ghcr1/blobs/sh
    ######################################################################## 100.0%
    ==> Downloading https://ghcr.io/v2/homebrew/core/grep/manifests/3.7
    ######################################################################## 100.0%
    ==> Downloading https://ghcr.io/v2/homebrew/core/grep/blobs/sha256:180f055eeacb1
    ==> Downloading from https://pkg-containers.githubusercontent.com/ghcr1/blobs/sh
    ######################################################################## 100.0%
    ==> Installing dependencies for grep: pcre
    ==> Installing grep dependency: pcre
    ==> Pouring pcre--8.45.mojave.bottle.tar.gz
    🍺  /usr/local/Cellar/pcre/8.45: 204 files, 5.5MB
    ==> Installing grep
    ==> Pouring grep--3.7.mojave.bottle.tar.gz
    ==> Caveats
    All commands have been installed with the prefix "g".
    If you need to use these commands with their normal names, you
    can add a "gnubin" directory to your PATH from your bashrc like:
    ==> Summary
    🍺  /usr/local/Cellar/grep/3.7: 21 files, 941.7KB
    ==> Upgrading 1 dependent:
    zsh 5.7.1 -> 5.8_1
    ==> Upgrading zsh
      5.7.1 -> 5.8_1
    ==> Downloading https://ghcr.io/v2/homebrew/core/zsh/manifests/5.8_1
    ######################################################################## 100.0%
    ==> Downloading https://ghcr.io/v2/homebrew/core/zsh/blobs/sha256:a40a54e4b686eb
    ==> Downloading from https://pkg-containers.githubusercontent.com/ghcr1/blobs/sh
    ######################################################################## 100.0%
    ==> Pouring zsh--5.8_1.mojave.bottle.tar.gz
    🍺  /usr/local/Cellar/zsh/5.8_1: 1,531 files, 13.5MB
    Removing: /usr/local/Cellar/zsh/5.7.1... (1,515 files, 13.3MB)
    ==> Checking for dependents of upgraded formulae...
    ==> No broken dependents found!
    ==> Caveats
    ==> grep
    All commands have been installed with the prefix "g".
    If you need to use these commands with their normal names, you
    can add a "gnubin" directory to your PATH from your bashrc like:
    Unable to verify OpenPGP signature unless logged into keybase and following hashicorp
    Archive:  tfenv_download.qXFIgg/terraform_1.0.5_darwin_amd64.zip
      inflating: /usr/local/Cellar/tfenv/2.2.2/versions/1.0.5/terraform
    Installation of terraform v1.0.5 successful. To make this your default version, run 'tfenv use 1.0.5'

    PROTIP: The above commands create folder .terraform.d in your $HOME folder, containing files checkpoint_cache and checkpoint_signature.

    See Hashicorp’s blog about version announcements.

  5. Make the latest the default:

    tfenv use 1.0.5
    Switching default version to v1.0.5
    Switching completed
  6. Proceed to Configuration below.

Terragrunt from Gruntwork

A popular wrapper for some standard terraform commands is the set of terragrunt commands, open-sourced at https://github.com/gruntwork-io/terragrunt by Gruntwork:

   terragrunt get
   terragrunt plan
   terragrunt apply
   terragrunt output
   terragrunt destroy

These wrapper commands provide a quick way to fill in gaps in Terraform - providing extra tools for working with multiple Terraform modules, managing remote state, and keeping DRY (Don’t Repeat Yourself), so that you only have to define it once, no matter how many environments you have.

Unlike Terraform, Terragrunt can configure remote state, locking, extra arguments, etc.
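For example, Terragrunt’s remote_state block (in terragrunt.hcl) can generate the backend configuration for every module; a sketch with a hypothetical bucket name:

```hcl
remote_state {
  backend = "s3"
  config = {
    bucket  = "my-terraform-state"   # hypothetical bucket name
    key     = "${path_relative_to_include()}/terraform.tfstate"
    region  = "us-east-1"
    encrypt = true
  }
}
```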

WARNING: There are some concerns about Terragrunt’s use of invalid data structures. See https://github.com/gruntwork-io/terragrunt/issues/466

QUESTION: Does Terraform Enterprise cover the features of Terragrunt?

Install on MacOS:

  1. To install Terragrunt on macOS:

    brew unlink tfenv
    brew install terragrunt
    brew unlink terraform
    brew link --overwrite tfenv

    The unlink is to avoid error response:

    Error: Cannot install terraform because conflicting formulae are installed.
      tfenv: because tfenv symlinks terraform binaries
    Please `brew unlink tfenv` before continuing.
    Unlinking removes a formula's symlinks from /usr/local. You can
    link the formula again after the install finishes. You can --force this
    install, but the build may fail or cause obscure side effects in the
    resulting software.


    ==> Installing dependencies for terragrunt: terraform
    ==> Installing terragrunt dependency: terraform
    ==> Downloading https://homebrew.bintray.com/bottles/terraform-0.12.24.catalina.
    Already downloaded: /Users/wilson_mar/Library/Caches/Homebrew/downloads/041f7578654b5ef316b5a9a3a3af138b602684838e0754ae227b9494210f4017--terraform-0.12.24.catalina.bottle.tar.gz
    ==> Pouring terraform-0.12.24.catalina.bottle.tar.gz
    🍺  /usr/local/Cellar/terraform/0.12.24: 6 files, 51.2MB
    ==> Installing terragrunt
    ==> Downloading https://homebrew.bintray.com/bottles/terragrunt-0.23.10.catalina
    ==> Downloading from https://akamai.bintray.com/d6/d6924802f5cdfd17feae2b561ab9d
    ######################################################################## 100.0%
    ==> Pouring terragrunt-0.23.10.catalina.bottle.tar.gz
    🍺  /usr/local/Cellar/terragrunt/0.23.10: 5 files, 30.4MB
  2. For the Terragrunt menu on macOS:


    Expand the Terminal/console window edge for full screen to see all lines without wrapping:

    terragrunt - Terragrunt is a thin wrapper for Terraform that provides extra tools for working with multiple
    Terraform modules, remote state, and locking. For documentation, see https://github.com/gruntwork-io/terragrunt/.
    terragrunt <COMMAND> [GLOBAL OPTIONS]
    run-all               Run a terraform command against a 'stack' by running the specified command in each subfolder. E.g., to run 'terragrunt apply' in each subfolder, use 'terragrunt run-all apply'.
    terragrunt-info       Emits limited terragrunt state on stdout and exits
    validate-inputs       Checks if the terragrunt configured inputs align with the terraform defined variables.
    graph-dependencies    Prints the terragrunt dependency graph to stdout
    hclfmt                Recursively find hcl files and rewrite them into a canonical format.
    aws-provider-patch    Overwrite settings on nested AWS providers to work around a Terraform bug (issue #13018)
    *                     Terragrunt forwards all other commands directly to Terraform
    terragrunt-config                            Path to the Terragrunt config file. Default is terragrunt.hcl.
    terragrunt-tfpath                            Path to the Terraform binary. Default is terraform (on PATH).
    terragrunt-no-auto-init                      Don't automatically run 'terraform init' during other terragrunt commands. You must run 'terragrunt init' manually.
    terragrunt-no-auto-retry                     Don't automatically re-run command in case of transient errors.
    terragrunt-non-interactive                   Assume "yes" for all prompts.
    terragrunt-working-dir                       The path to the Terraform templates. Default is current directory.
    terragrunt-download-dir                      The path where to download Terraform code. Default is .terragrunt-cache in the working directory.
    terragrunt-source                            Download Terraform configurations from the specified source into a temporary folder, and run Terraform in that temporary folder.
    terragrunt-source-update                     Delete the contents of the temporary folder to clear out any old, cached source code before downloading new source code into it.
    terragrunt-iam-role                          Assume the specified IAM role before executing Terraform. Can also be set via the TERRAGRUNT_IAM_ROLE environment variable.
    terragrunt-iam-assume-role-duration          Session duration for IAM Assume Role session. Can also be set via the TERRAGRUNT_IAM_ASSUME_ROLE_DURATION environment variable.
    terragrunt-ignore-dependency-errors          *-all commands continue processing components even if a dependency fails.
    terragrunt-ignore-dependency-order           *-all commands will be run disregarding the dependencies
    terragrunt-ignore-external-dependencies      *-all commands will not attempt to include external dependencies
    terragrunt-include-external-dependencies     *-all commands will include external dependencies
    terragrunt-parallelism  N>                   *-all commands parallelism set to at most N modules
    terragrunt-exclude-dir                       Unix-style glob of directories to exclude when running *-all commands
    terragrunt-include-dir                       Unix-style glob of directories to include when running *-all commands
    terragrunt-check                             Enable check mode in the hclfmt command.
    terragrunt-hclfmt-file                       The path to a single hcl file that the hclfmt command should run on.
    terragrunt-override-attr                     A key=value attribute to override in a provider block as part of the aws-provider-patch command. May be specified multiple times.
    terragrunt-debug                             Write terragrunt-debug.tfvars to working folder to help root-cause issues.
    terragrunt-log-level                         Sets the logging level for Terragrunt. Supported levels: panic, fatal, error, warn (default), info, debug, trace.
    terragrunt-strict-validate                   Sets strict mode for the validate-inputs command. By default, strict mode is off. When this flag is passed, strict mode is turned on. When strict mode is turned off, the validate-inputs command will only return an error if required inputs are missing from all input sources (env vars, var files, etc). When strict mode is turned on, an error will be returned if required inputs are missing OR if unused variables are passed to Terragrunt.
    Gruntwork <www.gruntwork.io>
  3. To define:

    terragrunt = {
      # (put your Terragrunt configuration here)
    }

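In current Terragrunt versions the configuration lives in terragrunt.hcl without the wrapper block; a sketch pointing at a hypothetical local module:

```hcl
# terragrunt.hcl
terraform {
  source = "../modules/vpc"   # hypothetical module path
}

inputs = {
  name = "my-vpc"             # passed to the module as a variable
}
```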
Install on Windows

  1. Open a Run command window as Administrator.
  2. Install Chocolatey:
  3. Install Terraform using Chocolatey:

    choco install terraform -y

    The response at time of writing:

    Chocolatey v0.10.8
    Installing the following packages:
    By installing you accept licenses for the packages.
    Progress: Downloading terraform 0.10.6... 100%
    terraform v0.10.6 [Approved]
    terraform package files install completed. Performing other installation steps.
    The package terraform wants to run 'chocolateyInstall.ps1'.
    Note: If you don't run this script, the installation will fail.
    Note: To confirm automatically next time, use '-y' or consider:
    choco feature enable -n allowGlobalConfirmation
    Do you want to run the script?([Y]es/[N]o/[P]rint): y
    Removing old terraform plugins
    Downloading terraform 64 bit
      from 'https://releases.hashicorp.com/terraform/0.10.6/terraform_0.10.6_windows_amd64.zip'
    Progress: 100% - Completed download of C:\Users\vagrant\AppData\Local\Temp\chocolatey\terraform\0.10.6\terraform_0.10.6_windows_amd64.zip (12.89 MB).
    Download of terraform_0.10.6_windows_amd64.zip (12.89 MB) completed.
    Hashes match.
    Extracting C:\Users\vagrant\AppData\Local\Temp\chocolatey\terraform\0.10.6\terraform_0.10.6_windows_amd64.zip to C:\ProgramData\chocolatey\lib\terraform\tools...
     ShimGen has successfully created a shim for terraform.exe
     The install of terraform was successful.
      Software installed to 'C:\ProgramData\chocolatey\lib\terraform\tools'
    Chocolatey installed 1/1 packages.
     See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).
  4. Proceed to Configuration below.

Install on Linux

To manually install on Ubuntu:

  1. On a Console (after substituting the current version):

    sudo curl -O https://releases.hashicorp.com/terraform/0.12.0/terraform_0.12.0_linux_amd64.zip
    sudo apt-get install unzip
    sudo unzip terraform_0.12.0_linux_amd64.zip -d /usr/local/bin/


Instructions below are for the Command Line.

If you prefer using Python, there is a Python module to provide a wrapper of terraform command line tool at https://github.com/beelit94/python-terraform

Command Alias list & help

  1. For a list of commands, use the abbreviated alternate to the terraform command:


    Alternately, use the long form:


    Either way, the response is a menu (at time of writing):

    Usage: terraform [global options] <subcommand> [args]
    The available commands for execution are listed below.
    The primary workflow commands are given first, followed by
    less common or more advanced commands.
    Main commands:
      init          Prepare your working directory for other commands
      validate      Check whether the configuration is valid
      plan          Show changes required by the current configuration
      apply         Create or update infrastructure
      destroy       Destroy previously-created infrastructure
    All other commands:
      console       Try Terraform expressions at an interactive command prompt
      fmt           Reformat your configuration in the standard style
      force-unlock  Release a stuck lock on the current workspace
      get           Install or upgrade remote Terraform modules
      graph         Generate a Graphviz graph of the steps in an operation
      import        Associate existing infrastructure with a Terraform resource
      login         Obtain and save credentials for a remote host
      logout        Remove locally-stored credentials for a remote host
      output        Show output values from your root module
      providers     Show the providers required for this configuration
      refresh       Update the state to match remote systems
      show          Show the current state or a saved plan
      state         Advanced state management
      taint         Mark a resource instance as not fully functional
      test          Experimental support for module integration testing
      untaint       Remove the 'tainted' state from a resource instance
      version       Show the current Terraform version
      workspace     Workspace management
    Global options (use these before the subcommand, if any):
      -chdir=DIR    Switch to a different working directory before executing the
                 given subcommand.
      -help         Show this help output, or the help for a specified subcommand.
      -version      An alias for the "version" subcommand.

    NOTE: The terraform remote command configures remote state storage.
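    In current Terraform versions, remote state is configured declaratively with a backend block instead; a sketch with a hypothetical bucket:

    ```hcl
    terraform {
      backend "s3" {
        bucket = "my-terraform-state"   # hypothetical bucket name
        key    = "prod/terraform.tfstate"
        region = "us-east-1"
      }
    }
    ```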

    BLAH: Terraform doesn’t have an alias command (like Git) to add custom subcommands, so one has to remember which commands are Terragrunt’s and which are standard Terraform’s.

  2. Install Terragrunt wrapper:


  3. Help on a specific command, for example:

    terraform plan --help

    Terraform Console

  4. Open the Terraform Console (REPL) from a Terminal/command shell:

    terraform console

    The response is the prompt:

  5. Commands can interpret numbers:


    The response is (because counting begins from zero):

    1:3: element: expected 2 arguments, got 3 in:
  6. Type exit or press (on a Mac) control+C to return to your Terminal window.


    You should now be at your operating system console.

Reusable Modules

Putting Terraform code in modules enables reuse by several teams and projects, which speeds development and reduces testing.

But some documentation and training is necessary.

For example, to create a simple AWS VPC (Virtual Private Cloud),

  1. Allocate IPs outside the VPC module declaration.

    resource "aws_eip" "nat" {
      count = 3
      vpc   = true
    }
  2. Declare the module (see https://github.com/terraform-aws-modules/terraform-aws-vpc/tree/master/examples):

    module "vpc" {
      source = "terraform-aws-modules/vpc/aws"
      name = "my-vpc"
      cidr = ""
      azs             = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
      private_subnets = ["", "", ""]
      public_subnets  = ["", "", ""]
      enable_nat_gateway = true
      enable_vpn_gateway = true
      tags = {
        Terraform   = "true"
        Environment = "dev"
      }
    }
    • “azs” designates Availability Zones.

Community modules

Terraform provides its own modules. But where Terraform comes up short, customer administrators can write modules of their own to add more logic while continuing to use declarative specifications (templates). Thus Terraform defines the “desired state configuration” (DSC).

Terraform Modules are how to add “smartness” to manage each DevOps component:


Blogs and tutorials on modules:

Provider credentials

Since the point of Terraform is to get you into clouds, Terraform looks for specific environment variables containing AWS credentials.

  1. Go to IAM in AWS to define a user with a password.
  2. Grant permissions to the AWS user to use services.
  3. Mac users: add credentials in their ~/.bash_profile these lines:

    export AWS_ACCESS_KEY_ID=(your access key id)
    export AWS_SECRET_ACCESS_KEY=(your secret access key)
    export AWS_REGION=(your region in AWS)

    For Azure:

    AZ_REGION=""  # aka location

    For Google Cloud (the provider can read credentials from a service-account key file; values assumed for illustration):

    export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
    export GOOGLE_PROJECT="your-project-id"

PROTIP: Specifying passwords in environment variables is more secure than typing passwords in tf files*.

Sample Terraform scripts


Terraform on AWS

VIDEO: Implementing Terraform with AWS by Ned Bellavance at https://github.com/ned1313/Implementing-Terraform-on-AWS

Terraform on Azure



VIDEO: Implementing Terraform on Microsoft Azure by Ned Bellavance

  1. In a browser, go to straight to the Azure Cloud Shell:

  2. PROTIP: Azure uses the subscription you last used (based on cookies saved from your previous session). So switch to another browser profile or switch to another Subscription.

    az account list

    “isDefault”: true, means you’re using the default Azure account.

    Alternatively, environment variables can specify a Service Principal with its client secret:

    export ARM_CLIENT_ID="..."
    export ARM_CLIENT_SECRET="..."
    export ARM_SUBSCRIPTION_ID="..."
    export ARM_TENANT_ID="..."

    Alternately, to use a container’s Managed Service Identity (MSI) instead of ARM_CLIENT_SECRET:

    export ARM_USE_MSI=true
    export ARM_SUBSCRIPTION_ID="..."
    export ARM_TENANT_ID="..."
  3. Terraform is pre-installed:

    terraform --version
    Terraform v0.14.10
    Your version of Terraform is out of date! The latest version
    is 0.15.0. You can update by downloading from https://www.terraform.io/downloads.html

    See what is the latest version and details for each release.

  4. Got storage?

  5. Navigate to your default folder:

    cd clouddrive
  6. Use Git (installed by default):

    git clone https://github.com/lukeorellana/terraform-on-azure
    cd terraform-on-azure

    The repo contains these folders:

    • 01-intro
    • 02-init-plan-apply-destroy
    • 03-terraform-state
    • 04-variables
    • 05-modules
    • 06-advanced-hcl

  7. To add the tfstate file specification to the .gitignore file:

    echo "terraform.tfstate" >>.gitignore
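Other generated state artifacts are commonly ignored as well; a sketch (patterns assumed, not from the course repo):

```shell
# append common Terraform ignore patterns to .gitignore
echo ".terraform/" >> .gitignore
echo "*.tfstate.backup" >> .gitignore
```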


  8. Use VSCode (installed by default) to view blocks in Terraform HCL file:

    cd ~/clouddrive/terraform-on-azure/02-init-plan-apply-destroy/01-intro
    code main.tf

    NOTE: Each key-value pair is an argument containing an expression of a text value.

    Each HCL configuration needs to specify the (cloud) provider being used; here that is “azurerm”.

    NOTE: Multiple providers can be specified in the same HCL file.
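For example, a single configuration could declare both providers used in this tutorial (a sketch; the empty features block is required by azurerm 2.x):

```hcl
provider "azurerm" {
  features {}               # required (empty) block for azurerm 2.x
}

provider "aws" {
  region = "us-east-1"      # a second provider in the same configuration
}
```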

  9. Search for “Resource Group” in Terraform’s Azure Provider docs:


    for “azurerm_resource_group”.

  10. Download and install binaries providers need:

    terraform init


    Initializing backends...
    Initializing provider plugins...
           - Finding hashicorp/azurerm versions matching "2.40.0"...
           - Installing hashicorp/azurerm v2.40.0...
           - Installed hashicorp/azurerm v2.40.0 (signed by HashiCorp)
    Terraform has created a lock file .terraform.lock.hcl to record the provider
    selections it made above. Include this file in your version control repository
    so that Terraform can guarantee to make the same selections by default when
    you run "terraform init" in the future.
    Terraform has been successfully initialized!
    You may now begin working with Terraform. Try running "terraform plan" to see
    any changes that are required for your infrastructure. All Terraform commands
    should now work.
    If you ever set or change modules or backend configuration for Terraform,
    rerun this command to reinitialize your working directory. If you forget, other
    commands will detect it and remind you to do so if necessary.

    Terraform Plan

  11. Preview the changes Terraform would make:

    terraform plan
  12. Have Terraform make those changes:

    terraform apply
  13. Respond to “Enter a Value:”

  14. Verify it worked:

    az group list -o table

    “Environment” = “terraexample”

  15. Done with Terraform

    terraform destroy -auto-approve
  16. Respond to “Enter a Value:”

  17. Navigate to the next example:

    cd ~/clouddrive/terraform-on-azure/02-init-plan-apply-destroy/02-interpolation
    terraform init
    code main.tf
  18. Execute:

    terraform apply -auto-approve

    Remember that Terraform determines the order in which blocks run (from its dependency graph), not the order they appear in the file.

  19. This example has output blocks to separate tfstate for the virtual network and each resource group (using interpolation):

    code ~/clouddrive/terraform-on-azure/03-terraform-state/02-remote-state/main.tf

    The output blocks can be moved to a separate outputs.tf file.

  20. data blocks (data sources) read information about existing infrastructure.
  21. variables.tf for reusability. Define default values referred to as “var.” in:

    code ~/clouddrive/terraform-on-azure/04-variables/02-deployvariables/terraform.tfvars

    Environment variables are referenced as “TF_VAR_XXX”

    A map is a collection of values keyed by name, useful for lookups and conditional logic.

    An object can contain lists, maps, etc.
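A sketch of a map variable (names and values hypothetical):

```hcl
variable "instance_sizes" {
  type = map(string)
  default = {
    dev  = "t2.micro"
    prod = "t2.large"
  }
}

# referenced elsewhere as var.instance_sizes["dev"], or with a fallback:
# lookup(var.instance_sizes, var.environment, "t2.micro")
```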

  22. The 05-modules example passes NSG (Network Security Group) output between modules.


  • Advanced location variable:

    variable "location" {
        type        = string
        description = "Azure location (region)"
        default     = ""
    }
    resource "azurerm_resource_group" "rg" {
      name     = "rg-testcondition"
      location = var.location != "" ? var.location : "westus2"
    }
  • Chapter 37 shows use of for_each to specify hub-and-spoke networking.

Terraform on Azure documentation index by Microsoft:



  • https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs
  • https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/guides/getting-started
  • https://kubernetes.io/blog/2020/06/working-with-terraform-and-kubernetes/
  • https://opensource.com/article/20/7/terraform-kubernetes

HCL (Hashicorp Configuration Language)

Terraform defined HCL (Hashicorp Configuration Language) for both human and machine consumption. HCL is defined at https://github.com/hashicorp/hcl and described at https://www.terraform.io/docs/configuration/syntax.html.

HCL is less verbose than JSON and more concise than YAML. *

REMEMBER: Files containing JSON-formatted Terraform configuration have the name suffix “*.tf.json”.

More importantly, unlike JSON and YAML, HCL allows annotations (comments). As in bash scripts, single-line comments start with # (pound sign) or // (double forward slashes). Multi-line comments are wrapped between /* and */.

\ back-slashes specify continuation of long lines (as in Bash).

The minimal HCL specifies the cloud provider and the instance type used to house the AMI, which is specific to a region:

provider "aws" {
   version    = ">= 1.2, < 2.0"
   region     = "${var.aws_region}"
   access_key = "${var.aws_access_key}"
   secret_key = "${var.aws_secret_key}"
}
resource "aws_instance" "example" {
   ami           = "ami-2757f631"
   instance_type = "t2.micro"
}

“provider” and “resource” are each a configuration block.

Interpolation variables

  • Each block defined between curly braces is called a “stanza”.

  • Variable substitution (interpolation) has a format similar to shell scripts:

image = "${var.aws_region}"

PROTIP: Interpolation allows a single file to be specified for several environments (dev, qa, stage, prod), with a variable file to specify only values unique to each environment.

“var.” above references values defined in the file “variables.tf”, which provides the “Enter a value:” prompt when needed:

   variable "aws_access_key" {
      description = "AWS access key"
   }
   variable "aws_secret_key" {
      description = "AWS secret key"
   }
   variable "aws_region" {
      description = "AWS region"
   }

Values are defined in the terraform.tfvars file.

The value for “name” must be unique or an error is thrown.

Values can be interpolated using syntax wrapped in ${}, called interpolation syntax, in the format ${type.name.attribute}. For example, ${aws_instance.base.id} is interpolated to something like i-28978a2. A literal ${ is written by doubling the dollar sign: $${.

Interpolations can contain logic and mathematical operations, such as abs(), replace(string, search, replace).

HCL does not contain if/else statements (only ternary ?: expressions), which is one reason modules (described below) are necessary.
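Conditional behavior is instead expressed with the ternary ?: operator, often combined with the count meta-argument; a minimal sketch (variable name hypothetical):

```hcl
variable "create_instance" {
  default = true
}

resource "aws_instance" "example" {
  count         = var.create_instance ? 1 : 0   # one instance, or none
  ami           = "ami-2757f631"
  instance_type = "t2.micro"
}
```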

HCL2, standard since Terraform 0.12, combines HCL with the interpolation language HIL to produce a single configuration language that supports arbitrary expressions. It’s not fully backward compatible.

Terraform processes all .tf files in the directory invoked, in alphabetical order.

fmt HCL Coding Conventions

The fmt command reformats HCL files according to rules.

  • A space before and after “=” assignment is not required, but makes for easier reading.

Environment variables

  • Values for variables can be specified at run-time using environment variables whose names start with “TF_VAR_”, such as TF_VAR_server_port.

    But unlike other systems, environment variables have lower precedence than -var and -var-file definitions and automatic variable files (terraform.tfvars, *.auto.tfvars).
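For example, a variable left without a default must be supplied at run time, from the environment or otherwise (a sketch):

```hcl
variable "server_port" {
  description = "The port the server will use for HTTP requests"
  # No default here, so supply one of:
  #   export TF_VAR_server_port=8080
  #   terraform plan -var 'server_port=8080'   # overrides TF_VAR_server_port
  #   a terraform.tfvars or *.auto.tfvars entry
}
```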


https://github.com/cloudposse has mostly AWS Terraform, such as https://github.com/cloudposse/load-testing

Gruntwork’s sample

Gruntwork.io offers, for $4,950, access to their 250,000-line Reference Architecture of starter code to create a production-worthy “defense in depth” setup on AWS:


An additional $500 a month gets you access to their Reference Architecture Walkthrough video class. But previews of the class are free:

For those without the big bucks, Yevgeniy (Jim) Brikman (ybrikman.com, co-founder of DevOps as a Service Gruntwork.io) has generously shared:

The sample scripts referenced by this tutorial contain moustache variable mark-up so that you can generate a set for your organization.

AWS EC2 Credentials

The above minimal HCL can be in a file named ec2.tf.

PROTIP: AWS credentials such as these, if included in tf files, might inadvertently be made visible to the public by getting checked into a public repository:

access_key = "ACCESS_KEY_HERE"
secret_key = "SECRET_KEY_HERE"

If you simply leave out AWS credentials, Terraform will automatically search for saved API credentials (for example, in ~/.aws/credentials) or IAM instance profile credentials.

Security Scanners

https://github.com/iacsecurity/tool-compare lists specific tests (of vulnerability) and which products can detect each.

Checkov is an OSS static scanner of Terraform, AWS Cloud Formation, and Azure ARM templates.

Cloudrail from Indeni is a freemium scanner utility which audits Terraform IaC code for security concerns. It calls itself “context-aware” because, while Terratest requires that you deploy the infra and run tests against it live, Cloudrail takes a hybrid (SAST+DAST) approach: parsing static TF files into a database (of resources in a Python object) and “continuously” comparing that against the live infrastructure in a separate Python object fetched dynamically using their Dragoneye data collector (for AWS and Azure).

When run on local environments, security scanning achieves “shift left”.

Terraform Enterprise TFLint

An important distinction between CloudFormation and Terraform is that Terraform tracks the state of each resource.

Terraform Enterprise automatically stores the history of all state revisions. https://www.terraform.io/docs/state

VIDEO: Terraform Enterprise has producers (experts) and read-only consumers. Terraform Enterprise processes HCL with auditing policies like linter https://github.com/terraform-linters/tflint, installed on Windows using choco install tflint. See https://spin.atomicobject.com/2019/09/03/cloud-infrastructure-entr/

[8:25] Terraform Enterprise enforces “policy as code” which automates the application of what CIS (Center for Internet Security) calls (free) “benchmarks” – secure configuration settings for hardening operating systems, for AWS settings at (the 155 page) https://www.cisecurity.org/benchmark/amazon_web_services/.

  • Set to public instead of private?

Terratest from Gruntwork.


Saving tfstate in S3 Backend

In a team environment, it helps to store state files off a local disk, in a “backend” location central to all.

  1. Using AWS IAM, define an AWS user with Permissions in a Role.
  2. Obtain and save credentials for user in an environment variable.

    VIDEO: Terraform Remote State on Amazon S3 describes use of a file named backend.tf, such as this AWS S3 specification, after substituting “YouOwn” with the (globally unique) S3 bucket name defined with the current AWS credentials:

    terraform {
      backend "s3" {
        bucket = "YouOwn-terraform"
        key    = "terraform.tfstate"
        region = "us-east-1"
      }
    }

    Apply to create tfstate

  3. While in the same folder where there is a “backend.tf” file (above), have Terraform read the above to establish an EC2 instance when given the command:

    tf apply
  4. Confirm by typing “yes”.

    A new file terraform.tfstate is created to save the configuration state.

  5. Manually verify on the AWS Management Console webpage set to service S3.

    Destroy tfstate

  6. While in the same folder where there is a “backend.tf” file (above), have Terraform destroy the resources it created when given the command:

    tf destroy
  7. Confirm by typing “yes”.

    The file terraform.tfstate should be deleted.

  8. Manually verify on the AWS Management Console webpage set to service S3.

Validate .tf files

  1. Navigate into the repo and view files in:

    ls single-web-server

    The contents:

    README.md    main.tf     outputs.tf   variables.tf

    This set can be within a sub-module folder.

    variables.tf (vars.tf)

    This file contains a reference to environment variables:

    variable "aws_access_key" {}
    variable "aws_secret_key" {}
    variable "subnet_count" {
      default = 2
    }

    An example of the variables.tf file is explained in the video Get started managing a simple application with Terraform (February 21, 2018, by Alexandra White at Joyent), which shows the deployment of the Happy Randomizer app:

    variable "image_name" {
      type        = "string"
      description = "The name of the image for the deployment."
      default     = "happy_randomizer"
    }
    variable "image_version" {
      type        = "string"
      description = "The version of the image for the deployment."
      default     = "1.0.0"
    }
    variable "image_type" {
      type        = "string"
      description = "The type of the image for the deployment."
      default     = "lx-dataset"
    }
    variable "package_name" {
      type        = "string"
      description = "The package to use when making a deployment."
      default     = "g4-highcpu-128M"
    }
    variable "service_name" {
      type        = "string"
      description = "The name of the service in CNS."
      default     = "happiness"
    }
    variable "service_networks" {
      type        = "list"
      description = "The name or ID of one or more networks the service will operate on."
      default     = ["Joyent-SDC-Public"]
    }

    In a cluster environment:

    variable "server_port" {
      description = "The port the server will use for HTTP requests"
      default     = 8080
    }

    PROTIP: Each input should be defined as a variable.

    Credentials in tfvars

    Define cloud account credentials in a terraform.tfvars file containing sample data:

    aws_access_key = "123456789abcdef123456789"
    aws_secret_key = "Your AWS SecretKey"
    aws_region = "us-east-1"
    aws_accountId = "123456789123456789"
    private_key_path = "C:\\PathToYourPrivateKeys\PrivateKey.pem"

    Risking such information in a potentially shared repo is not good security practice.

    tfvars environments

    PROTIP: Separate Terraform configurations by a folder for each environment:

    • base (template for making changes)
    • dev
    • loadtest (performance/stress testing)
    • stage
    • uat (User Acceptance Testing)
    • prod
    • demo (demonstration used by salespeople)
    • train (for training users)

  2. Navigate into the base folder.

    PROTIP: Terraform commands act only on the current directory; they do not recurse into sub-directories.

  3. View the development.tfvars file:

    environment_tag = "dev"
    tenant_id = "223d"
    billing_code_tag = "DEV12345"
    dns_site_name = "dev-web"
    dns_zone_name = "mycorp.xyz"
    dns_resource_group = "DNS"
    instance_count = "2"
    subnet_count = "2"

    The production.tfvars file usually instead contains more instances, and thus subnets, that go through a load balancer for auto-scaling:

    environment_tag = "prod"
    tenant_id = "223d"
    billing_code_tag = "PROD12345"
    dns_site_name = "marketing"
    dns_zone_name = "mycorp.com"
    dns_resource_group = "DNS"
    instance_count = "6"
    subnet_count = "3"

    All these would use main_config.tf and variables.tf files commonly used for all environments:

    Tag for cost tracking by codes identifying a particular budget, project, department, etc.
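Such cost-tracking tags might look like this inside a resource block (tag names hypothetical):

```hcl
tags = {
  Environment = "dev"
  BillingCode = "DEV12345"   # identifies the budget/department to charge
}
```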

    Defaults and lookup function

    PROTIP: Variables can be assigned multiple default values selected by a lookup function:

    # export AWS_DEFAULT_REGION=xx-yyyy-0
    variable "server_port" {
      description = "The port the server will use for HTTP requests"
      default     = 8080
    }
    variable "amis" {
      type = "map"
      default = {
        us-east-1 = "ami-1234"
        us-west-1 = "ami-5678"
      }
    }
    ami = "${lookup(var.amis, "us-east-1")}"

    PROTIP: With AWS EC2, region “us-east-1” must be used as the basis for creating others.

    NOTE: Amazon has an approval process for making AMIs available on the public Amazon Marketplace.


    An example of the main.tf file:

    terraform {
      required_version = ">= 0.8, < 0.9"
    }
    provider "aws" {
      alias      = "NorthEast"
      region     = "us-east-1"
      access_key = "${var.AWS_ACCESS_KEY}"
      secret_key = "${var.AWS_SECRET_KEY}"
    }
    resource "aws_instance" "web" {
      ami           = "ami-40d28157"
      instance_type = "t2.micro"
      subnet_id     = "subnet-c02a3628"
      vpc_security_group_ids = ["sg-a1fe66aa"]
      tags {
        Identity = "..."
      }
    }

    NOTE: Components of Terraform are: provider, resource, provisioner.

    “t2.micro” qualifies for the Amazon free tier available to first-year subscribers.

    PROTIP: Vertically aligning values helps to make information easier to find.

    The ami (amazon machine image) identifier is obtained from Amazon’s catalog of public images.

    subnet_id is for the VPC and vpc_security_group_ids array.

    tags_identity is to scope permissions.

    See http://www.antonbabenko.com/2016/09/21/how-i-structure-terraform-configurations.html

    Another example is from the Terransible lab and course

Terraform Providers

Terraform translates HCL into API calls defined in (at last count, 109) cloud provider repositories from Terraform, Inc. at:



Terraform Built-in Providers

“aws”, “google”, “google-beta”, “azurerm”, “azuread”, “heroku”, Kubernetes, “gitlab”, DigitalOcean, Heroku, GitHub, OpenStack, “cloudscale”, “cloudstack”, “opentelekomcloud”, “oci” (Oracle Cloud Infrastructure), “opc” (Oracle Public Cloud), “oraclepaas” (Oracle Platform Cloud), “flexibleengine”, “nsxt” (VMware NSX-T), “rancher”, “rancher2”, “vcd” (VMware vCloud Director), “openstack”, “azurestack”, “scaleway”, “UCloud”, “JDcloud”, Joyent Triton, Circonus, NaverCloud, TelefonicaOpenCloud, oneandone, Skytap, etc.

In China: “alicloud”, “huaweicloud”, “tencentcloud”, etc.

Monitoring and other infrastructure services vendors: “datadog”, “grafana”, “newrelic”, “pagerduty”, “bigip” (F5 BigIP), “RabbitMQ”, “acme”, “yandex”, “ciscoasa” (ASA), etc.

CDN vendors: Dyn, “fastly”, “cloudflare”, “netlify”, “packet” (Terraform Packet), “consul” (Terraform Consul), “nutanix”, “ignition”, “dnsimple”, “fortis”, LogicMonitor, “profitbricks”, “statuscake”, etc.

Database and repositories: “influxdb”, “mysql”, “postgresql”, “vault” (Terraform), “bitbucket”, “github”, “archive”, etc.

Servers: “docker”, “dns”, UltraDNS, “helm” (Terraform), “http”, “vsphere” (VMware vSphere), etc.

chef, “spotinst”, “linode”, “hedvig”, “selectel”, “brightbox”, “OVH”, “nomad”, “local”, Panos, NS1, “rundeck”, VMWare vRA7, random, external, “null”, Icinga2, Arukas, runscope, etc.

The following have been archived: Atlas (Terraform), “clc” (CenturyLinkCloud), OpsGenie, (IBM) SoftLayer, PowerDNS, DNSMadeEasy, Librato, Mailgun, LogEntries, Gridscale, CIDR, etc.


VIDEO INTRO: Terraform now offers a Terraform Cloud provider to manage VCS provider GitHub in temporary test workspaces to see the impact of incremental changes.

Custom Terraform Providers are written in the Go language.

The steps below are based on https://www.terraform.io/intro/examples and implemented in the setup scripts at: https://github.com/wilsonmar/mac-setup which performs the following steps for you:

  1. Install a Git client if you haven’t already.
  2. Use an internet browser (Chrome) to see the sample assets at:


  3. If you are going to make changes, click the Fork button.
  4. Create or navigate to a container folder where new repositories are added. For example:


  5. Get the repo onto your laptop (substituting “wilsonmar” with your own account name):

    git clone https://github.com/terraform-providers/terraform-provider-aws.git --depth=1 && cd tf-sample

    The above is one line, but may be word-wrapped on your screen.

    The response at time of writing:

    Cloning into 'tf-sample'...
    remote: Counting objects: 12, done.
    remote: Compressing objects: 100% (12/12), done.
    remote: Total 12 (delta 1), reused 9 (delta 0), pack-reused 0
    Unpacking objects: 100% (12/12), done.
  6. PROTIP: Make sure that the AWS region is what you want.

    https://www.terraform.io/docs/providers/aws/r/instance.html AWS provider

    VPC Security Group

  7. VPC Security group

    The example in Gruntwork’s intro-to-terraform also specifies the vpc security group:

    resource "aws_instance" "example" {
      # Ubuntu Server 14.04 LTS (HVM), SSD Volume Type in us-east-1
      ami = "ami-2d39803a"
      instance_type = "t2.micro"
      vpc_security_group_ids = ["${aws_security_group.instance.id}"]
      user_data = <<-EOF
               echo "Hello, World" > index.html
               nohup busybox httpd -f -p "${var.server_port}" &
               EOF
      tags {
        Name = "ubuntu.t2.hello.01"
      }
    }
    resource "aws_security_group" "instance" {
      name = "terraform-example-instance"
      # Inbound HTTP from anywhere:
      ingress {
        from_port   = "${var.server_port}"
        to_port     = "${var.server_port}"
        protocol    = "tcp"
        cidr_blocks = [""]
      }
    }

    The “var.server_port” is defined in variables file:

    The tag value is what AWS uses to name the EC2 instance.

    Execution control

    Terraform automatically detects and enforces rule violations, such as use of rogue port numbers other than 80/443.


    Sample contents of an outputs.tf file:

      output "public_ip" {
        value = "${aws_instance.example.public_ip}"
      }
      output "url" {
        value = "http://${aws_instance.example.public_ip}:${var.port}"
      }

    Sample contents of an outputs.tf file for a cluster points to the Elastic Load Balancer:

    output "elb_dns_name" {
      value = "${aws_elb.example.dns_name}"
    }



    As with Java and other programming code, Terraform coding should be tested too.

    Gruntwork has an open-source library to setup and tear down conditions for verifying whether servers created by Terraform actually work.

    https://github.com/gruntwork-io/terratest is a Go library that makes it easier to write automated tests for your infrastructure code. It uses Packer, ssh, and other commands to automate experimentation and to collect the results (impact) of various configuration changes.

    Quick Start Terratest


terraform validate

  1. Validate the folder (see https://www.terraform.io/docs/commands/validate.html)

    terraform validate single-web-server

    If no issues are identified, no message appears. (no news is good news)

  2. Add a pre-commit hook to validate in your Git repository


    PROTIP: There should be only one main.tf per folder.

    Plug-in Initialization

    Cloud providers are not included with the installer, so…

  3. In your gits folder:

    git clone https://github.com/terraform-providers/terraform-provider-aws.git --depth=1
  4. Initialize Terraform working directory (like git init) plug-ins:

    terraform init

    Sample response:

    Initializing provider plugins...
           - Checking for available provider plugins on https://releases.hashicorp.com...
           - Downloading plugin for provider "aws" (1.17.0)...
    The following providers do not have any version constraints in configuration,
    so the latest version was installed.
    To prevent automatic upgrades to new major versions that may contain breaking
    changes, it is recommended to add version = "..." constraints to the
    corresponding provider blocks in configuration, with the constraint strings
    suggested below.
           * provider.aws: version = "~> 1.17"
    Terraform has been successfully initialized!
    You may now begin working with Terraform. Try running "terraform plan" to see
    any changes that are required for your infrastructure. All Terraform commands
    should now work.
    If you ever set or change modules or backend configuration for Terraform,
    rerun this command to reinitialize your working directory. If you forget, other
    commands will detect it and remind you to do so if necessary.

    See https://www.terraform.io/docs/commands/init.html

    This creates a hidden “.terraform/plugins” folder path containing a folder for your OS: “darwin_amd64” for macOS.


When a resource is initially created, provisioners can be executed to initialize that resource.

Provisioner definitions define the properties of each resource, such as initialization commands. For example, this installs an nginx web server and displays a minimal HTML page:

Provisioner configurations are also plugins.

provisioner "remote-exec" {
  inline = [
    "sudo yum install nginx -y",
    "sudo service nginx start",
    "echo '<html><head><title>NGINX server</title></head><body style=\"background-color\"></body></html>'"
  ]
}


local-exec provisioner

NOTE: Software can be specified for installation using Terraform’s local-exec provisioner, which executes commands on the host machine running Terraform. For example, on an Ubuntu machine:

resource "null_resource" "local-software" {
  provisioner "local-exec" {
    command = <<EOH
sudo apt-get update
sudo apt-get install -y ansible
EOH
  }
}

NOTE: apt-get is in-built within Ubuntu Linux distributions.

PROTIP: Use this to bootstrap automation such as assigning permissions and running Ansible or PowerShell DSC, then use DSC scripts for more flexibility and easier debugging.


To invoke the command to run Ansible playbook.yml:

provisioner "local-exec" {
   command = "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook \
   -u ${var.user} -i '${self.ipv4_address},' \
   --private-key ${var.ssh_private_key} playbook.yml"
}

The key component is the ${self.ipv4_address} variable. After provisioning the machine, Terraform knows its IP address, and we need to pass an IP address to Ansible. Therefore, we use this built-in Terraform variable as input for Ansible.

Another option is to run Terraform and Ansible separately but import the data from one to another. Terraform saves all the information about provisioned resources into a Terraform state file. We can find the IP addresses of Terraform-provisioned instances there and import them into the Ansible inventory file.

Terraform Inventory (https://github.com/adammck/terraform-inventory) extracts from the state file the IP addresses for use by an Ansible playbook to configure nodes.

Ansible can use hash_vault to retrieve secrets from a Hashicorp Vault.


  • https://www.hashicorp.com/resources/ansible-terraform-better-together
  • https://www.digitalocean.com/community/tutorials/how-to-use-ansible-with-terraform-for-configuration-management

NOTE: Ansible Tower cannot be used with Terraform.

CIDR Subnet function

variable "network_info" {
   default = ""  # type, default, description
}
cidr_block = "${cidrsubnet(var.network_info, 8, 1)}"  # returns
cidr_block = "${cidrsubnet(var.network_info, 8, 2)}"  # returns
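For example, assuming a hypothetical 10.0.0.0/16 network, cidrsubnet(prefix, newbits, netnum) extends the prefix by newbits bits and returns the subnet at index netnum:

```hcl
variable "network_info" {
  default = "10.0.0.0/16"   # hypothetical value for illustration
}

cidr_block = "${cidrsubnet(var.network_info, 8, 1)}"   # "10.0.1.0/24"
cidr_block = "${cidrsubnet(var.network_info, 8, 2)}"   # "10.0.2.0/24"
```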



In this example terraform.tfvars file are credentials for both AWS EC2 and Azure ARM providers:

bucket_name = "mycompany-sys1-v1"
arm_subscription_id = "???"
arm_principal = "???"
arm_password = "???"
tenant_id = "223d"
aws_access_key = "insert access key here"
aws_secret_key = "insert secret key here"
private_key_path = "C:\\MyKeys1.pem"

The private_key_path should be a full path, using \\ so that a single backslash is not interpreted as an escape character.

bucket_name must be globally unique within all of the AWS provider customers.

Terraforming AWS Configuration

PROTIP: Install from https://github.com/dtan4/terraforming a Ruby script that enables a command such as:

terraforming s3 --profile dev

You can pass the profile name with the --profile option.


outputs.tf file example:

output "aws_elb_public_dns" {
  value = "${aws_elb.web.dns_name}"
}
output "public_ip" {
  value = "${aws_instance.example.public_ip}"
}
output "azure_rm_dns_cname" {
  value = "${azurerm_dns_cname_record.elb.id}"
}

  1. PROTIP: If the AMI is no longer available, you will get an error message.

Terraform Plan

  1. Have Terraform evaluate based on vars in a different (parent) folder:

    terraform plan \
       -var 'site_name=demo.example.com' \
       -var-file='..\terraform.tfvars' \
       -var-file='.\Development\development.tfvars' \
       -state='.\Development\dev.state' \
       -out base-`date +'%s'`.plan

    The -var parameter specifies a value for var.site_name variable.

    The two dots in the command specify looking above the current folder.

    The -out parameter specifies the output file name. Since the output of terraform plan is fed into the terraform apply command, a static file name is best. However, some prefer to avoid overwriting by automatically using a different date stamp in the file name.

    The "%s" format yields a Unix time stamp such as 1477723450, which is the number of seconds since the 1/1/1970 epoch.
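    A minimal shell sketch (file name prefix assumed) of how such a time-stamped plan file name is generated:

```shell
# Generate a unique plan file name using the Unix epoch time stamp
ts=$(date +%s)               # seconds since 1/1/1970
plan_file="base-${ts}.plan"  # e.g. base-1477723450.plan
echo "$plan_file"
```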

    A sample response:

    "<computed>" means Terraform figures the value out at apply time.

    Pluses and minuses flag additions and deletions. This diff-style preview is a key differentiator for Terraform.

    Terraform creates a dependency graph (specifically, a Directed Acyclic Graph) so that nodes are built in the order they are needed.

    Terraform apply

  2. Type:

    terraform apply "happy.plan"

    Alternatively, specify the state file and variables inline:

    terraform apply -state=".\develop\dev.state" \
       -var="environment_name=development"

    Alternative specification of environment variable:

    TF_VAR_first_name="John" terraform apply
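    A minimal shell sketch (variable name from the example above) showing how TF_VAR_ environment variables map onto Terraform variables:

```shell
# Any environment variable named TF_VAR_<name> becomes var.<name> inside Terraform
export TF_VAR_first_name="John"
printenv TF_VAR_first_name   # prints: John
```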

    Values assigned to Terraform variables define inputs, such as run-time DNS/IP addresses, into Terraform modules.

    What apply does:

    1. Generate model from logical definition (the Desired State).
    2. Load current model (preliminary source data).
    3. Refresh current state model by querying remote provider (final source state).
    4. Calculate difference from source state to target state (plan).
    5. Apply plan.

    NOTE: Built-in functions: https://terraform.io/docs/configuration/interpolation.html

    Sample response from terraform apply:

    dns_names = [
        ...
    ]
    primaryIp = [
        ...
    ]

State management

BLOG: Yevgeniy Brikman (Gruntwork) “How to manage Terraform state”

Although AWS manages state within CloudFormation, to remain cloud-agnostic, Terraform users need to manage state using Terraform's own features.

terraform apply generates .tfstate files (containing JSON) to persist the state of runs by mapping resource IDs to their data. In addition to the terraform program version, the file contains a serial number that increments every time the file itself changes.

PROTIP: CAUTION: tfstate files can contain secrets, so .gitignore and delete them before git add.

  1. In the .gitignore file, list files generated during processing, since they don't need to persist in a repository:


    tfstate.backup is created from the most recent previous execution before the current tfstate file contents.

    .terraform/ specifies that the folder is ignored when pushing to GitHub.

    Terraform apply creates a dev.state.lock.info file as a way to signal to other processes to stay away while changes to the environment are underway.
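    A minimal .gitignore covering the generated files described above might look like this (patterns assumed from the file names mentioned):

```
*.tfstate
*.tfstate.backup
.terraform/
*.lock.info
```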

    Remote state

    NOTE: terraform.tfstate can be stored over the network in S3, an etcd distributed key-value store (used by Kubernetes), or a HashiCorp Atlas or Consul server. (HashiCorp Atlas is a licensed solution.)

    State can be obtained using command:

    terraform remote pull


    Terraform manages state through several backends:

    • local (the default)
    • etcd (distributed key value store used by Kubernetes)

    • gcs
    • azurerm
    • artifactory
    • manta
    • s3
    • swift

    • consul (Hashicorp product)
    • atlas (Hashicorp product)
    • terraform enterprise

    Output variables

  2. Output Terraform variable:

    output "loadbalancer_dns_name" {
      value = "${aws_elb.loadbalancer.dns_name}"
    }

    Processing flags

    HCL can contain flags that affect processing. For example, within a resource specification, force_destroy = true forces the provider to delete the resource when done.
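    For example, a sketch (bucket name assumed) of force_destroy on an S3 bucket resource:

```hcl
resource "aws_s3_bucket" "logs" {
  bucket        = "mycompany-sys1-logs"  # example name; must be globally unique
  force_destroy = true  # on terraform destroy, delete the bucket even if it still holds objects
}
```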

    Verify websites

  3. Is the website accessible?

  4. In the provider's console (such as EC2), verify that the expected resources were created.

    Destroy to clean up

  5. Destroy instances so they don’t rack up charges unproductively:

    terraform destroy

    PROTIP: At the time of this writing, Amazon charges for Windows instances by the hour while it charges for Linux by the minute, as other cloud providers do.

  6. Verify in the provider’s console (aws.amazon.com)

Plugins into Terraform

All Terraform providers are plugins: separate processes that Terraform core calls via RPC (Remote Procedure Calls).



Terraform expects plugins to follow a very specific naming convention of terraform-TYPE-NAME. For example, terraform-provider-aws tells Terraform that the plugin is a provider that can be referenced as "aws".

PROTIP: Establish a standard for where plugins are located:

For *nix systems, ~/.terraformrc

For Windows, %APPDATA%/terraform.rc


PROTIP: When writing your own Terraform plugin, create a new Go project in GitHub, then locally use a directory structure such as:

    $GOPATH/src/github.com/USERNAME/terraform-NAME

where USERNAME is your GitHub username and NAME is the name of the plugin you're developing. This structure is what Go expects and simplifies things down the road.


  • Grafana or Kibana monitoring
  • PagerDuty alerts
  • DataDog metrics


Crossplane.io provides more flexible ways than Terraform to interact with Kubernetes. Its github.com/crossplane organization has providers for AWS, Azure, and GCP.

Densify FinOps

densify.com dynamically self-optimizes configurations based on predictive analytics. This "FinOps" approach works by updating tags in AWS with recommendations for server type, based on real-time cost and performance analysis:

vm_size = "${module.densify.instance_type}"


It’s defined in terraform.tf:

module "densify" {
  source  = "densify-dev/optimization-as-code/null"
  version = "1.0.0"
  densify_recommendations = "${var.densify_recommendations}"
  densify_fallback        = "${var.densify_fallback}"
  densify_unique_id       = "${var.name}"
}


A Terraform module is a container for multiple resources that are used together.

Terraform modules provide “blueprints” to deploy.

The module’s source can be on a local disk:

module "service_foo" {
  source = "/modules/microservice"
  image_id = "ami-12345"
  num_instances = 3
}
The source can be from a GitHub repo such as https://github.com/objectpartners/tf-modules

module "rancher" {
  source = "github.com/objectpartners/tf-modules//rancher/server-standalone-elb-db?ref=9b2e590"
}

  • Notice "https://" is not part of the source string; it's assumed.
  • Double slashes in the URL above separate the repo from the subdirectory.
  • PROTIP: The ref above is the first 7 hex digits of a commit SHA hash ID. Alternatively, a semantic version tag value (such as "v1.2.3") can be specified. This is a key enabler for an immutable infrastructure strategy.

https://registry.terraform.io is hosted by HashiCorp to provide a marketplace of modules.

The https://registry.terraform.io/modules/hashicorp/vault module installs HashiCorp's own Vault and Consul on AWS EC2, Azure, or GCP.
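A sketch of referencing that registry module from your own configuration (the version constraint is hypothetical; check the registry for actual releases):

```hcl
module "vault" {
  # Registry shorthand source: NAMESPACE/NAME/PROVIDER
  source  = "hashicorp/vault/aws"
  # Hypothetical version pin -- see the registry for current versions
  version = "~> 0.13"
}
```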

Video of demo by Yevgeniy Brikman.

The above is created by making use of https://github.com/hashicorp/terraform-aws-vault, stored as sub-folder hashicorp/vault/aws:

    terraform init hashicorp/vault/aws
    terraform apply

It creates 33 resources. The sub-modules are:

  • private-tls-cert (for all providers)
  • vault-cluster (for all providers)
  • vault-lb-fr (for Google only)
  • vault-elb (for AWS only)
  • vault-security-group-rules (for AWS only)

Rock Stars

Here are people who have taken time to create tutorials for us about Terraform:

Ned Bellavance (@ned1313 MS MVP at nerdinthecloud.com) has several video classes on Pluralsight [subscription]:

Derek Morgan in May 2018 released video courses on LinuxAcademy.com:

Dave Cohen in April 2018 made 5 hands-on videos using a Digital Ocean Personal Access Token (PAT).

Seth Vargo, Director of Evangelism at HashiCorp, gave a deep-dive hands-on introduction to Terraform at the O’Reilly conference on June 20-23, 2016. If you have a SafaribooksOnline subscription, see the videos: Part 1 [48:17], Part 2 [37:53]

Saurav Sharma created a YouTube playlist that references code at https://github.com/Cloud-Yeti/aws-labs as starters for his website of videos and his Udemy course.

Yevgeniy (Jim) Brikman (ybrikman.com), co-founder of DevOps as a Service Gruntwork.io, notes that some procedural tasks, such as zero-downtime deployment, are hard to express in purely declarative terms.

Comprehensive Guide to Terraform includes:

James Turnbull

Jason Asse

Nick Colyer (Skylines Academy)

Kirill Shirinkin

James Nugent

  • Engineer at Hashicorp

Anton Babenko (github.com/antonbabenko linkedin)


Kyle Rockman (@Rocktavious, author of Jenkins Pipelines and github.com/rocktavious) presented at HashiConf17 (slides) a self-service app to use Terraform (powered by React+Redux using Jinja2, with a Gunicorn + Django back end running HA in AWS) that he hopes to open-source at github.com/underarmour

Others (YouTube videos):


PDF: HashiCorp's Cloud Operating Model whitepaper

VIDEO: Learn Terraform in 10 Minutes Tutorial by Reval Govender

VIDEO: SignalWarrant’s videos on PowerShell by David Keith Hall includes:

Terraform Basics mini-course on YouTube in 5-parts from “tutorialLinux”.

http://chevalpartners.com/devops-infrastructure-as-code-on-azure-platform-with-hashicorp-terraform-part-1/ quotes https://www.hashicorp.com/blog/azure-resource-manager-support-for-packer-and-terraform from 2016 about support for Azure Resource Manager.

Sajith Venkit explains Terraform files by example in his "Building Docker Enterprise 2.1 Cluster Using Terraform" blog and repo for AliCloud and Azure.

AWS Cloudformation vs Terraform: Prepare for DevOps/ Cloud Engineer Interview

How to create a GitOps workflow with Terraform and Jenkins by Alex Podobnik

VIDEO: Manage SSH with HashiCorp Vault

github.com/dod-iac (DOD Infrastructure as Code) offers 36 examples of how the Pentagon uses Terraform within AWS: IAM, S3, EBS, KMS, Kinesis, API Gateway, Lambda, MFA, GuardDuty, Route53, etc.

VIDEO: Terraform Provider Azure.gov for standardized templates across clouds.


2 hr. VIDEO: Terraform for DevOps Beginners + Labs by Vijin Palazhi.

More on DevOps

This is one of a series on DevOps:

  1. DevOps_2.0
  2. ci-cd (Continuous Integration and Continuous Delivery)
  3. User Stories for DevOps
  4. Enterprise Software

  5. Git and GitHub vs File Archival
  6. Git Commands and Statuses
  7. Git Commit, Tag, Push
  8. Git Utilities
  9. Data Security GitHub
  10. GitHub API
  11. TFS vs. GitHub

  12. Choices for DevOps Technologies
  13. Pulumi Infrastructure as Code (IaC)
  14. Java DevOps Workflow
  15. Okta for SSO & MFA

  16. AWS DevOps (CodeCommit, CodePipeline, CodeDeploy)
  17. AWS server deployment options
  18. AWS Load Balancers

  19. Cloud services comparisons (across vendors)
  20. Cloud regions (across vendors)
  21. AWS Virtual Private Cloud

  22. Azure Cloud Onramp (Subscriptions, Portal GUI, CLI)
  23. Azure Certifications
  24. Azure Cloud

  25. Azure Cloud Powershell
  26. Bash Windows using Microsoft’s WSL (Windows Subsystem for Linux)
  27. Azure KSQL (Kusto Query Language) for Azure Monitor, etc.

  28. Azure Networking
  29. Azure Storage
  30. Azure Compute
  31. Azure Monitoring

  32. Digital Ocean
  33. Cloud Foundry

  34. Packer automation to build Vagrant images
  35. Terraform multi-cloud provisioning automation
  36. Hashicorp Vault and Consul to generate and hold secrets

  37. Powershell Ecosystem
  38. Powershell on MacOS
  39. Powershell Desired System Configuration

  40. Jenkins Server Setup
  41. Jenkins Plug-ins
  42. Jenkins Freestyle jobs
  43. Jenkins2 Pipeline jobs using Groovy code in Jenkinsfile

  44. Docker (Glossary, Ecosystem, Certification)
  45. Make Makefile for Docker
  46. Docker Setup and run Bash shell script
  47. Bash coding
  48. Docker Setup
  49. Dockerize apps
  50. Docker Registry

  51. Maven on MacOSX

  52. Ansible

  53. MySQL Setup

  54. SonarQube & SonarSource static code scan

  55. API Management Microsoft
  56. API Management Amazon

  57. Scenarios for load
  58. Chaos Engineering