Wilson Mar

Client-only immutable multi-cloud provisioning, with open-sourced Enterprise support


This tutorial is a step-by-step, hands-on, deep yet succinct introduction to using Hashicorp’s Terraform to build, change, and version clusters of immutable servers (behind load balancers) running in clouds, using idempotent declarative specifications.

This integrates examples and wisdom from videos and blogs by “rock stars” working in various organizations.

Infrastructure as Code Competition

Like AWS CloudFormation, Terraform automation saves money by automating the configuration of servers, which is quicker and more consistent than manually clicking through the GUI.

The difference between Chef, Puppet, Ansible, SaltStack, AWS CloudFormation, and Terraform:

(Comparison chart colorized from Gruntwork’s blog.)

Terraform’s advantage over Amazon’s CloudFormation scripts is that Terraform can also provision on-premises servers running OpenStack, as well as AWS, Azure, Google Cloud, Digital Ocean, Fastly, and other cloud providers – “anything with an API”.

Terraform makes infrastructure provisioning Repeatable. Versioned. Documented. Automated. Testable. Shareable.

Terraform and Ansible can work in unison and complement each other. Terraform can bootstrap the underlying cloud infrastructure and then Ansible provisions the user space. To test a service on a dedicated server, skip using Terraform and run the Ansible playbook on that machine. Linux Academy has a “Deploy to AWS with Ansible and Terraform” video class by Derek Morgan who shows how to do just that, with code and diagram.

“Immutable” means once instantiated, it doesn’t change. In DevOps, this strategy means individual servers are treated like “cattle” (removed from the herd) and not as “pets” (courageously kept alive as long as possible). When I make a mistake in a complicated setup, I can get going again quickly and easily with less troubleshooting because I can just re-run the script.


Feature                           CloudFormation      Terraform
Source code                       closed-source       open source
Open-source contributions?        No                  Yes (GitHub issues)
Configuration format              JSON                HCL, JSON
State management                  JSON                HCL, JSON
Cloud providers supported         AWS only            AWS, GCE, Azure (20+)
Execution control                 No                  Yes
Iterations                        No                  Yes
Manage already-created resources  No                  Yes (hard)
Failure handling                  Optional rollback   Fix & retry
Logical comparisons               No                  Limited
Extensible modules                No                  Yes

Terraform also provides parallel execution control, iterations, and (perhaps most of all) management of resources already created (desired state configuration) over several cloud providers (not just AWS).

A key differentiator is Terraform’s plan command, which provides more than just a “dry run” before configurations are applied for real. Under the covers, plan can save its execution plan to a file, which apply then executes, guaranteeing that what is applied is exactly what was planned.
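The saved-plan workflow can be sketched as follows (the file name tf.plan is arbitrary):

```shell
terraform plan -out=tf.plan    # write the execution plan to a file
terraform apply tf.plan        # apply exactly what was planned
```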

Websites to know


Paid Pro and Premium licenses of Terraform add version control integration, MFA security, and other enterprise features.


PROTIP: Terraform is written in the Go language, so there is no JVM to download.

Bootstrapping options

Terraform can work with:



https://github.com/migibert/terraform-role Ansible role to install Terraform on Linux machines

https://github.com/hashicorp/docker-hub-images/tree/master/terraform builds Docker containers for using the terraform command line program.

Install on MacOS

  1. MEH: If you plan on frequently switching among several installed versions of Terraform, one alternative is:

    brew install tfenv

    The response at time of writing:

    ==> Downloading https://github.com/kamatama41/tfenv/archive/v0.6.0.tar.gz
    ==> Downloading from https://codeload.github.com/Zordrak/tfenv/tar.gz/v0.6.0
    ######################################################################## 100.0%
    🍺  /usr/local/Cellar/tfenv/0.6.0: 19 files, 23.5KB, built in 6 seconds

    Source for this is from https://github.com/Zordrak/tfenv (previously from https://github.com/kamatama41/tfenv)

    Alas, I don’t recommend it because when I tried to install the latest version using tfenv:

    tfenv install latest

    The error message I got was:

    [INFO] Installing Terraform v0.12.0
    [INFO] Downloading release tarball from https://releases.hashicorp.com/terraform/0.12.0/terraform_0.12.0_darwin_amd64.zip
    curl: (22) The requested URL returned error: 403 
    tfenv: tfenv-install: [ERROR] Tarball download failed

    When tfenv is used, do not install from the website or by using:

    brew install terraform

    PROTIP: The installer is for a specific version of MacOS (such as High Sierra):

    ==> Downloading https://homebrew.bintray.com/bottles/terraform-0.11.10.high_sierra.bottle.1.tar.gz
    Already downloaded: /Users/wilsonmar/Library/Caches/Homebrew/downloads/00744f3d03e5309d7548edd315f26202944b1594b0f98017fba6b7e12b191a90--terraform-0.11.10.high_sierra.bottle.1.tar.gz
    ==> Pouring terraform-0.11.10.high_sierra.bottle.1.tar.gz
    🍺  /usr/local/Cellar/terraform/0.11.10: 6 files, 102.1MB

    PROTIP: This creates folder .terraform.d in your $HOME folder, containing files checkpoint_cache and checkpoint_signature.

  2. Proceed to Get sample Terraform scripts.

Install on Windows

  1. Open a Run command window as Administrator.
  2. Install Chocolatey.
  3. Install Terraform using Chocolatey:

    choco install terraform -y

    The response at time of writing:

    Chocolatey v0.10.8
    Installing the following packages:
    By installing you accept licenses for the packages.
    Progress: Downloading terraform 0.10.6... 100%
    terraform v0.10.6 [Approved]
    terraform package files install completed. Performing other installation steps.
    The package terraform wants to run 'chocolateyInstall.ps1'.
    Note: If you don't run this script, the installation will fail.
    Note: To confirm automatically next time, use '-y' or consider:
    choco feature enable -n allowGlobalConfirmation
    Do you want to run the script?([Y]es/[N]o/[P]rint): y
    Removing old terraform plugins
    Downloading terraform 64 bit
      from 'https://releases.hashicorp.com/terraform/0.10.6/terraform_0.10.6_windows_amd64.zip'
    Progress: 100% - Completed download of C:\Users\vagrant\AppData\Local\Temp\chocolatey\terraform\0.10.6\terraform_0.10.6_windows_amd64.zip (12.89 MB).
    Download of terraform_0.10.6_windows_amd64.zip (12.89 MB) completed.
    Hashes match.
    Extracting C:\Users\vagrant\AppData\Local\Temp\chocolatey\terraform\0.10.6\terraform_0.10.6_windows_amd64.zip to C:\ProgramData\chocolatey\lib\terraform\tools...
     ShimGen has successfully created a shim for terraform.exe
     The install of terraform was successful.
      Software installed to 'C:\ProgramData\chocolatey\lib\terraform\tools'
    Chocolatey installed 1/1 packages.
     See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).
  4. Proceed to Get sample Terraform scripts.

Install on Ubuntu

  1. On a Console:

    sudo curl -O https://releases.hashicorp.com/terraform/0.11.5/terraform_0.11.5_linux_amd64.zip
    sudo apt-get install unzip
    sudo unzip terraform_0.11.5_linux_amd64.zip -d /usr/local/bin/
  2. Proceed to Get sample Terraform scripts.

Docker Install:

  1. To install Docker CE:

    sudo apt-get update
    sudo apt-get install \
     apt-transport-https \
     ca-certificates \
     curl \
     software-properties-common
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
    sudo apt-get update
    sudo apt-get install docker-ce
  2. Proceed to Get sample Terraform scripts.


Version verify

  1. Obtain the version installed, to check that it’s working:

    terraform version


    terraform --version

    WARNING: At the time of writing, Terraform had not yet reached a “1.0.0” release, meaning it was still at beta maturity:

    Terraform v0.11.10

    Commands list & help

  2. For a list of commands, type the executable name alone:

    terraform
    The response at time of writing:

    Usage: terraform [--version] [--help] <command> [args]
    The available commands for execution are listed below.
    The most common, useful commands are shown first, followed by
    less common or more advanced commands. If you're just getting
    started with Terraform, stick with the common commands. For the
    other commands, please read the help and docs before usage.
    Common commands:
     apply              Builds or changes infrastructure
     console            Interactive console for Terraform interpolations
     destroy            Destroy Terraform-managed infrastructure
     env                Workspace management
     fmt                Rewrites config files to canonical format
     get                Download and install modules for the configuration
     graph              Create a visual graph of Terraform resources
     import             Import existing infrastructure into Terraform
     init               Initialize a Terraform working directory
     output             Read an output from a state file
     plan               Generate and show an execution plan
     providers          Prints a tree of the providers used in the configuration
     push               Upload this Terraform module to Atlas to run
     refresh            Update local state file against real resources
     show               Inspect Terraform state or plan
     taint              Manually mark a resource for recreation
     untaint            Manually unmark a resource as tainted
     validate           Validates the Terraform files
     version            Prints the Terraform version
     workspace          Workspace management
    All other commands:
     debug              Debug output management (experimental)
     force-unlock       Manually unlock the terraform state
     state              Advanced state management
  3. Help on a specific command, for example:

    terraform plan --help

    Terraform Console

  4. Open the Terraform Console (REPL) from a Terminal/command shell:

    terraform console

    The response is the > prompt.


  5. Commands can interpret numbers:


    The response is (because counting begins from zero):

    1:3: element: expected 2 arguments, got 3 in:
  6. Type exit to return to Bash Terminal window.
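For reference, the element function takes a list and a zero-based index, so a valid console call looks like this (Terraform 0.11 syntax, illustrative values):

```
> element(list("a", "b", "c"), 1)
b
```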


NOTE: Automating infrastructure deployment consists of these features:

  • Provisioning resources
  • Planning updates
  • Using source control
  • Reusing templates

PROTIP: Terraform files are “idempotent” (repeat runs don’t change anything if nothing is changed). Thus Terraform defines the “desired state configuration”.

NOTE: Terraform remote configures remote state storage with Terraform.

Provider credentials

Since the point of Terraform is to get you into clouds, Terraform looks for specific environment variables containing AWS credentials. Many Mac users add these lines to their ~/.bash_profile:

export AWS_ACCESS_KEY_ID=(your access key id)
export AWS_SECRET_ACCESS_KEY=(your secret access key)

For Azure:


For Google Cloud:


Sample Terraform scripts

Terraform’s sample

The steps below are based on https://www.terraform.io/intro/examples and implemented in the setup scripts at: https://github.com/wilsonmar/mac-setup which performs the following steps for you:

  1. Install a Git client if you haven’t already.
  2. Use an internet browser (Chrome) to see the sample assets at:


  3. If you are going to make changes, click the Fork button.
  4. Create or navigate to a container folder where new repositories are added. For example:


  5. Get the repo onto your laptop (substituting “wilsonmar” with your own account name):

    git clone https://github.com/wilsonmar/tf-sample.git --depth=1 && cd tf-sample

    The above is one line, but may be word-wrapped on your screen.

    The response at time of writing:

    Cloning into 'tf-sample'...
    remote: Counting objects: 12, done.
    remote: Compressing objects: 100% (12/12), done.
    remote: Total 12 (delta 1), reused 9 (delta 0), pack-reused 0
    Unpacking objects: 100% (12/12), done.

Gruntwork’s sample

Gruntwork.io offers, for $4,950, access to their 250,000-line Reference Architecture of starter code to create a production-worthy “defense in depth” setup on AWS:


An additional $500 a month gets you access to their Reference Architecture Walkthrough video class. But previews of the class are free:

For those without the big bucks, Yevgeniy (Jim) Brikman (ybrikman.com, co-founder of DevOps as a Service Gruntwork.io) has generously shared:

The sample scripts referenced by this tutorial contain moustache variable mark-up so that you can generate a set for your organization.

Terragrunt from Gruntwork

Gruntwork has open-sourced its https://github.com/gruntwork-io/terragrunt executable, a thin wrapper for Terraform that provides extra tools for working with multiple Terraform modules and managing remote state.

The executable can be installed on macOS using brew install terragrunt

Using it means you henceforth run the terragrunt executable instead of terraform:

  • terragrunt get
  • terragrunt plan
  • terragrunt apply
  • terragrunt output
  • terragrunt destroy

The program also expects an additional top level in all .tfvars files:

terragrunt = {
     # (put your Terragrunt configuration here)
}
Unlike Terraform, Terragrunt can configure remote state, locking, extra arguments, and lots more.
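For example, a minimal sketch of remote state configured in a terraform.tfvars terragrunt block (bucket name and region are hypothetical; this is the tfvars-era Terragrunt syntax):

```hcl
terragrunt = {
  remote_state {
    backend = "s3"
    config {
      bucket = "my-terraform-state"
      key    = "terraform.tfstate"
      region = "us-east-1"
    }
  }
}
```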

HCL (Hashicorp Configuration Language)

Terraform defined HCL (Hashicorp Configuration Language) for both human and machine consumption. HCL is defined at https://github.com/hashicorp/hcl and described at https://www.terraform.io/docs/configuration/syntax.html.

HCL is less verbose than JSON and more concise than YAML. Unlike JSON and YAML, HCL allows annotations, as in bash scripts: single-line comments start with # (pound sign) or double forward slashes (//). Multi-line comments are wrapped between /* and */.

Values can be interpolated using syntax wrapped in ${}, called interpolation syntax, in the format of ${type.name.attribute}. Literal $ are coded by doubling up $$. For example, ${aws.instance.base.id} is interpolated to something like i-28978a2.
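For example, a resource attribute interpolating another resource’s attribute (resource names are illustrative):

```hcl
resource "aws_eip" "ip" {
  # Replaced at run time with the instance ID, e.g. i-28978a2:
  instance = "${aws_instance.base.id}"
}
```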

Each block within the curly braces is called a “stanza”.

Back-slashes specify continuation (as in Bash).

More importantly, .tf files are declarative, meaning that they define the desired end-state (outcomes). If 15 servers are declared, Terraform automatically adds or removes servers to end up with 15 servers, rather than requiring procedures such as “add 5 servers”.

Terraform can do that because Terraform knows how many servers it has setup already. It tracks the state.
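A declared count illustrates this: the number is an outcome, not a procedure (the AMI value below is hypothetical):

```hcl
resource "aws_instance" "web" {
  count         = 15           # desired end-state: exactly 15 servers
  ami           = "ami-12345"
  instance_type = "t2.micro"
}
```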

HCL does not have conditional if/else logic, which is why modules (described below) are necessary.

HCL2 combines the interpolation language HIL to produce a single configuration language that supports arbitrary expressions. It’s not backward compatible, with no direct migration path.

Validate .tf files

  1. Navigate into the repo and view files in:

    ls single-web-server

    The contents:

    README.md    main.tf     outputs.tf   variables.tf

    This set can be within a sub-module folder.

    Credentials in tfvars

    Define cloud account credentials in a terraform.tfvars file containing:

    aws_access_key = "YourAWSAccessKey"
    aws_secret_key = "YourAWSSecretKey"
    private_key_path = "C:\\PathToYourPrivateKeys\PrivateKey.pem"
    accountId = "YourAWSAccountID"

    Putting such information in a repo that is potentially shared is poor security.

    tfvars environments

    PROTIP: Separate Terraform configurations by a folder for each environment.

    • base (template for making changes)
    • dev
    • loadtest (performance/stress testing)
    • stage
    • uat (User Acceptance Testing)
    • prod
    • demo (demonstration used by salespeople)
    • train (for training users)

  2. Navigate into the base folder.

    PROTIP: Terraform commands act only on the current directory, and do not recurse into subdirectories.

  3. View the development.tfvars file:

    environment_tag = "dev"
    tenant_id = "223d"
    billing_code_tag = "DEV12345"
    dns_site_name = "dev-web"
    dns_zone_name = "mycorp.xyz"
    dns_resource_group = "DNS"
    instance_count = "2"
    subnet_count = "2"

    The production.tfvars file usually contains more instances, and thus subnets, that go through a load balancer for auto-scaling:

    environment_tag = "prod"
    tenant_id = "223d"
    billing_code_tag = "PROD12345"
    dns_site_name = "marketing"
    dns_zone_name = "mycorp.com"
    dns_resource_group = "DNS"
    instance_count = "6"
    subnet_count = "3"

    All of these use the main_config.tf and variables.tf files common to all environments.

    Tags enable cost tracking by codes identifying a particular budget, project, department, etc.

    variables.tf (vars.tf)

    This file contains a reference to environment variables:

    variable "aws_access_key" {}
    variable "aws_secret_key" {}
    variable "subnet_count" {
      default = 2
    }

    An example of a variables.tf file is explained in the video Get started managing a simple application with Terraform (February 21, 2018) by Alexandra White (at Joyent), which shows the deployment of the Happy Randomizer app:

    variable "image_name" {
      type        = "string"
      description = "The name of the image for the deployment."
      default     = "happy_randomizer"
    }
    variable "image_version" {
      type        = "string"
      description = "The version of the image for the deployment."
      default     = "1.0.0"
    }
    variable "image_type" {
      type        = "string"
      description = "The type of the image for the deployment."
      default     = "lx-dataset"
    }
    variable "package_name" {
      type        = "string"
      description = "The package to use when making a deployment."
      default     = "g4-highcpu-128M"
    }
    variable "service_name" {
      type        = "string"
      description = "The name of the service in CNS."
      default     = "happiness"
    }
    variable "service_networks" {
      type        = "list"
      description = "The name or ID of one or more networks the service will operate on."
      default     = ["Joyent-SDC-Public"]
    }

    In a cluster environment:

    variable "server_port" {
      description = "The port the server will use for HTTP requests"
      default = 8080
    }

    PROTIP: Each input should be defined as a variable.

    Defaults and lookup function

    PROTIP: Variables can be assigned multiple default values selected by a lookup function:

    # export AWS_DEFAULT_REGION=xx-yyyy-0
    variable "server_port" {
      description = "The port the server will use for HTTP requests"
      default = 8080
    }
    variable "amis" {
      type = "map"
      default = {
        us-east-1 = "ami-1234"
        us-west-1 = "ami-5678"
      }
    }
    ami = "${lookup(var.amis, "us-east-1")}"

    PROTIP: With AWS EC2, region “us-east-1” must be used as the basis for creating others.

    NOTE: Amazon has an approval process for making AMIs available on the public Amazon Marketplace.


    An example of the main.tf file:

    terraform {
      required_version = ">= 0.8, < 0.9"
    }
    provider "aws" {
      alias = "NorthEast"
      region = "us-east-1"
      access_key = "${var.AWS_ACCESS_KEY}"
      secret_key = "${var.AWS_SECRET_KEY}"
    }
    resource "aws_instance" "web" {
      ami           = "ami-40d28157"
      instance_type = "t2.micro"
      subnet_id     = "subnet-c02a3628"
      vpc_security_group_ids = ["sg-a1fe66aa"]
      tags {
        Identity = "..."
      }
    }

    NOTE: Components of Terraform are: provider, resource, provision.

    “t2.micro” qualifies for the Amazon free tier available to first-year subscribers.

    PROTIP: Vertically aligning values helps to make information easier to find.

    The ami (amazon machine image) identifier is obtained from Amazon’s catalog of public images.

    subnet_id is for the VPC and vpc_security_group_ids array.

    tags_identity is to scope permissions.

    See http://www.antonbabenko.com/2016/09/21/how-i-structure-terraform-configurations.html

    Cloud Providers

    Terraform translates HCL into API calls to cloud providers:

    “aws”, “google”, “azure”, Kubernetes, GitLab, DigitalOcean, Heroku, GitHub, OpenStack, etc.


  4. PROTIP: Make sure that the AWS region is what you want.

    https://www.terraform.io/docs/providers/aws/r/instance.html AWS provider

  5. VPC Security group

    The example in Gruntwork’s intro-to-terraform also specifies the vpc security group:

    resource "aws_instance" "example" {
      # Ubuntu Server 14.04 LTS (HVM), SSD Volume Type in us-east-1
      ami = "ami-2d39803a"
      instance_type = "t2.micro"
      vpc_security_group_ids = ["${aws_security_group.instance.id}"]
      user_data = <<-EOF
               #!/bin/bash
               echo "Hello, World" > index.html
               nohup busybox httpd -f -p "${var.server_port}" &
               EOF
      tags {
        Name = "ubuntu.t2.hello.01"
      }
    }
    resource "aws_security_group" "instance" {
      name = "terraform-example-instance"
      # Inbound HTTP from anywhere:
      ingress {
        from_port = "${var.server_port}"
        to_port = "${var.server_port}"
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
      }
    }

    The “var.server_port” is defined in the variables file (above).

    The tag value is what AWS uses to name the EC2 instance.

    Execution control

    Terraform automatically detects and enforces rule violations, such as use of rogue port numbers other than 80/443.


    Sample contents of an outputs.tf file:

  output "public_ip" {
    value = "${aws_instance.example.public_ip}"
  }
  output "url" {
    value = "http://${aws_instance.example.public_ip}:${var.server_port}"
  }

Sample contents of an outputs.tf file for a cluster points to the Elastic Load Balancer:

output "elb_dns_name" {
  value = "${aws_elb.example.dns_name}"
}

Tests

As with Java and other programming code, Terraform coding should be tested too.

Gruntwork has an open-source library to setup and tear down conditions for verifying whether servers created by Terraform actually work.

  • https://github.com/gruntwork-io/terratest is a Go library that makes it easier to write automated tests for your infrastructure code.

It uses Packer, ssh, and other commands.

The library can be used as the basis to automate experimentation and to collect results showing the impact of various configuration changes.

terraform validate

  1. Validate the folder using https://www.terraform.io/docs/commands/validate.html

    terraform validate single-web-server

    If no issues are identified, no message appears. (no news is good news)

    A pre-commit hook can run validation in your Git repository.


    PROTIP: There should be only one main.tf per folder.

    Plug-in Initialization

    Cloud providers are not included with the installer, so…

  2. In your gits folder:

    git clone https://github.com/terraform-providers/terraform-provider-aws.git --depth=1
  3. Initialize Terraform plug-ins:

    terraform init

    Sample response:

    Initializing provider plugins...
           - Checking for available provider plugins on https://releases.hashicorp.com...
           - Downloading plugin for provider "aws" (1.17.0)...
    The following providers do not have any version constraints in configuration,
    so the latest version was installed.
    To prevent automatic upgrades to new major versions that may contain breaking
    changes, it is recommended to add version = "..." constraints to the
    corresponding provider blocks in configuration, with the constraint strings
    suggested below.
           * provider.aws: version = "~> 1.17"
    Terraform has been successfully initialized!
    You may now begin working with Terraform. Try running "terraform plan" to see
    any changes that are required for your infrastructure. All Terraform commands
    should now work.
    If you ever set or change modules or backend configuration for Terraform,
    rerun this command to reinitialize your working directory. If you forget, other
    commands will detect it and remind you to do so if necessary.

    See https://www.terraform.io/docs/commands/init.html

    This creates a hidden .terraform/plugins folder path containing a folder for your OS – darwin_amd64 for MacOS.


    Provisioner configurations are also plugins.

    Provisioner definitions define the properties of each resource, such as initialization commands. For example, this installs an nginx web server and displays a minimal HTML page:

    provisioner "remote-exec" {
      inline = [
        "sudo yum install nginx -y",
        "sudo service nginx start",
        "echo '<html><head><title>NGINX server</title></head><body style=\"background-color\"></body></html>'"
      ]
    }

    CIDR Subnet function

    variable "network_info" {
      default = "" # type, default, description
    }
    cidr_block = "${cidrsubnet(var.network_info, 8, 1)}" # returns
    cidr_block = "${cidrsubnet(var.network_info, 8, 2)}" # returns


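A worked example, assuming a hypothetical /16 network: cidrsubnet extends the prefix by the given number of bits and selects the numbered subnet:

```hcl
variable "network_info" {
  default = "10.1.0.0/16"
}
# 16 prefix bits + 8 new bits = /24 subnets:
cidr_block = "${cidrsubnet(var.network_info, 8, 1)}"  # returns "10.1.1.0/24"
cidr_block = "${cidrsubnet(var.network_info, 8, 2)}"  # returns "10.1.2.0/24"
```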

    In this example, a terraform.tfvars file holds credentials for both AWS EC2 and Azure ARM providers:

    bucket_name = "mycompany-sys1-v1"
    arm_subscription_id = "???"
    arm_principal = "???"
    arm_password = "???"
    tenant_id = "223d"
    aws_access_key = "insert access key here"
    aws_secret_key = "insert secret key here"
    private_key_path = "C:\\MyKeys1.pem"

    The private_key_path should be a full path, containing \\ (double backslashes) so that a single backslash is not interpreted as a special character.

    bucket_name must be globally unique among all AWS customers.

    Terraforming AWS Configuration

    PROTIP: Install, from https://github.com/dtan4/terraforming, a Ruby script that enables commands such as:

    terraforming s3 --profile dev

    You can pass the profile name with the --profile option.


    outputs.tf file example:

    output "aws_elb_public_dns" {
      value = "${aws_elb.web.dns_name}"
    }
    output "public_ip" {
      value = "${aws_instance.example.public_ip}"
    }
    output "azure_rm_dns_cname" {
      value = "${azurerm_dns_cname_record.elb.id}"
    }
  4. PROTIP: If the AMI is no longer available, you will get an error message.

    Terraform Plan

  5. Have Terraform evaluate the plan based on vars in a different (parent) folder:

    terraform plan \
       -var-file='..\terraform.tfvars' \
       -var-file='.\Development\development.tfvars' \
       -state='.\Development\dev.state' \
       -out=base-$(date +'%s').plan

    The two dots in the command specify looking above the current folder.

    The -out parameter specifies the output file name. Since the output of terraform plan is fed into the terraform apply command, a static file name is best. However, some prefer to avoid overwriting by automatically using a different date stamp in the file name.

    The “%s” yields a date stamp like 147772345, which is the number of seconds since the 1/1/1970 epoch.
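A minimal Bash sketch of building such a timestamped plan file name:

```shell
#!/bin/sh
# Stamp a plan file name with the number of seconds since the Unix epoch.
STAMP=$(date +'%s')
PLAN_FILE="base-${STAMP}.plan"
echo "$PLAN_FILE"
```

Running it prints something like base-1477723450.plan.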

    A sample response from terraform plan:

    "<computed>" means Terraform figures it out.

    Pluses and minuses flag additions and deletions. This is a key differentiator for Terraform.

    Terraform creates a dependency graph (specifically, a Directed Acyclic Graph), so that nodes are built in the order they are needed.

    Terraform apply

  6. Type:

    terraform apply "happy.plan"


    terraform apply -state=".\develop\dev.state" -var="environment_name=development"

    Alternative specification of environment variables:

    TF_VAR_first_name=John terraform apply

    Values to Terraform variables define inputs such as run-time DNS/IP addresses into Terraform modules.

    What apply does:

    1. Generate model from logical definition (the Desired State).
    2. Load current model (preliminary source data).
    3. Refresh current state model by querying remote provider (final source state).
    4. Calculate difference from source state to target state (plan).
    5. Apply plan.

    NOTE: Built-in functions: https://terraform.io/docs/configuration/interpolation.html

    Sample response from terraform apply:

    dns_names = [
    primaryIp = [

    Ignore state files

    Terraform apply generates .tfstate files (containing JSON) to persist the state of runs. They map resource IDs to their data.

    PROTIP: CAUTION: tfstate files can contain secrets, so delete them before git add.

  7. The .gitignore file lists files generated during processing that don’t need to persist in a repository:


    tfstate.backup holds the state from the most recent previous execution, before the current tfstate file contents.

    .terraform/ specifies that the folder is ignored when pushing to GitHub.

    Terraform apply creates a dev.state.lock.info file as a way to signal to other processes to stay away while changes to the environment are underway.
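An illustrative .gitignore for a Terraform project, covering the files above:

```
.terraform/
*.tfstate
*.tfstate.backup
*.tfplan
crash.log
```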

    Remote state

    NOTE terraform.tfstate can be stored over the network in S3, etcd distributed key value store (used by Kubernetes), or a Hashicorp Atlas or Consul server. (Hashicorp Atlas is a licensed solution.)

    State can be obtained using command:

    terraform remote pull

    Apps to install

    NOTE: Software can be specified for installation using the local-exec provisioner, which executes commands on the host machine where Terraform runs. For example, on an Ubuntu machine:

    resource "null_resource" "local-software" {
      provisioner "local-exec" {
        command = <<EOH
    sudo apt-get update
    sudo apt-get install -y ansible
    EOH
      }
    }

    NOTE: apt-get is built into Ubuntu Linux distributions.

    PROTIP: Use this to bootstrap automation such as assigning permissions and running Ansible or PowerShell DSC, then use DSC scripts for more flexibility and easier debugging.

    Output variables

  8. Output Terraform variable:

    output "loadbalancer_dns_name" {
      value = "${aws_elb.loadbalancer.dns_name}"
    }

    Processing flags

    HCL can contain flags that affect processing. For example, within a resource specification, force_destroy = true forces the provider to delete the resource when done.
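For example, on an S3 bucket resource (the bucket name is hypothetical):

```hcl
resource "aws_s3_bucket" "logs" {
  bucket        = "mycompany-logs-v1"
  force_destroy = true   # delete the bucket and its objects on terraform destroy
}
```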

    Verify websites

  9. Is the website accessible?

  10. In the provider’s console (EC2), verify the instances created.

    Destroy to clean up

  11. Destroy instances so they don’t rack up charges unproductively:

    terraform destroy

    PROTIP: Amazon charges for Windows instances by the hour, while it charges for Linux by the minute, as other cloud providers do.

  12. Verify in the provider’s console (aws.amazon.com)

Plugins into Terraform

All Terraform providers are plugins - multi-process RPC (Remote Procedure Calls).



Terraform expects plugins to follow a very specific naming convention of terraform-TYPE-NAME. For example, terraform-provider-aws tells Terraform that the plugin is a provider that can be referenced as “aws”.

PROTIP: Establish a standard for where plugins are located:

For *nix systems, ~/.terraformrc

For Windows, %APPDATA%/terraform.rc


PROTIP: When writing your own terraform plugin, create a new Go project in GitHub, then locally use a directory structure:


where USERNAME is your GitHub username and NAME is the name of the plugin you’re developing. This structure is what Go expects and simplifies things down the road.


  • Grafana or Kibana monitoring
  • PagerDuty alerts
  • DataDog metrics


Terraform modules provide “blueprints” to deploy.

The module’s source can be on a local disk:

module "service_foo" {
  source = "/modules/microservice"
  image_id = "ami-12345"
  num_instances = 3
}

The source can be from a GitHub repo such as https://github.com/objectpartners/tf-modules

module "rancher" {
  source = "github.com/objectpartners/tf-modules//rancher/server-standalone-elb-db?ref=9b2e590"
}

  • Notice “https://” is not part of the source string.
  • Double slashes in the URL above separate the repo from the subdirectory.
  • PROTIP: The ref above is the first 7 hex digits of a commit SHA hash ID. Alternately, a semantic version tag value (such as “v1.2.3”) can be specified. This is a key enabler for an immutable strategy.
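Modules pulled from the public registry can be pinned the same way, using a version argument instead of ?ref= (the version number below is illustrative):

```hcl
module "consul" {
  source  = "hashicorp/consul/aws"
  version = "0.1.0"  # pin an exact release for immutability
}
```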

https://registry.terraform.io provides a marketplace of modules, including modules to create Hashicorp’s own Vault and Consul on AWS EC2, Azure, and GCP. Video of a demo by Yevgeniy Brikman:


The above is created by making use of https://github.com/hashicorp/terraform-aws-vault, published in the registry as hashicorp/vault/aws:

    terraform init hashicorp/vault/aws
    terraform apply

The module defines 33 resources. Its sub-modules are:

  • private-tls-cert (for all providers)
  • vault-cluster (for all providers)
  • vault-lb-fr (for Google only)
  • vault-elb (for AWS only)
  • vault-security-group-rules (for AWS only)

Community modules

Modules help you cope with the many DevOps components and alternatives.


Blogs and tutorials on modules:

Rock Stars

Here are people who have taken time to create tutorials for us:

Derek Morgan in May 2018 released video courses on LinuxAcademy.com:

Dave Cohen in April 2018 made 5 hands-on videos using a Digital Ocean Personal Access Token (PAT).

Seth Vargo, Director of Evangelism at HashiCorp, gave a deep-dive hands-on introduction to Terraform at the O’Reilly conference on June 20-23, 2016. If you have a SafaribooksOnline subscription, see the videos: Part 1 [48:17], Part 2 [37:53]

Yevgeniy (Jim) Brikman (ybrikman.com), co-founder of “DevOps as a Service” Gruntwork.io, notes that some procedures, such as zero-downtime deployment, are hard to express in purely declarative terms.

He also wrote the “Comprehensive Guide to Terraform” blog series.

James Turnbull

Jason Asse

Ned Bellavance (@ned1313 at nerdinthecloud.com) has several video classes on Pluralsight:

Nick Colyer

Kirill Shirinkin

James Nugent

  • Engineer at Hashicorp

Anton Babenko (github.com/antonbabenko linkedin)



https://github.com/dtan4/terraforming is a Ruby tool that exports existing AWS resources as Terraform code.

Kyle Rockman (@Rocktavious, author of Jenkins Pipelines and github.com/rocktavious) presented at HashiConf17 (slides) a self-service app to use Terraform (powered by a React+Redux front end using Jinja2, with a Gunicorn + Django back end running HA in AWS) that he hopes to open-source at github.com/underarmour

Others (YouTube videos):

AWS Cloud Formation

Alternatives include Puppet, Chef, Ansible, and Salt, as well as AWS API libraries such as Boto and Fog.

AWS CloudFormation Sample Templates at https://github.com/awslabs/aws-cloudformation-templates

AWS CloudFormation Master Class by Stéphane Maarek (Packt, May 2018): https://www.safaribooksonline.com/library/view/aws-cloudformation-master/9781789343694/

Some CloudFormation templates are compatible with OpenStack Heat templates.


SignalWarrant’s videos on PowerShell by David Keith Hall includes:

Terraform Basics mini-course on YouTube in 5-parts from “tutorialLinux”.

http://chevalpartners.com/devops-infrastructure-as-code-on-azure-platform-with-hashicorp-terraform-part-1/ quotes https://www.hashicorp.com/blog/azure-resource-manager-support-for-packer-and-terraform from 2016 about support for Azure Resource Manager

Sajith Venkit explains example Terraform files in his “Building Docker Enterprise 2.1 Cluster Using Terraform” blog and repo for AliCloud and Azure.

More on DevOps

This is one of a series on DevOps:

  1. DevOps_2.0
  2. ci-cd (Continuous Integration and Continuous Delivery)
  3. User Stories for DevOps

  4. Git and GitHub vs File Archival
  5. Git Commands and Statuses
  6. Git Commit, Tag, Push
  7. Git Utilities
  8. Data Security GitHub
  9. GitHub API
  10. TFS vs. GitHub

  11. Choices for DevOps Technologies
  12. Java DevOps Workflow
  13. AWS DevOps (CodeCommit, CodePipeline, CodeDeploy)
  14. AWS server deployment options

  15. Cloud regions
  16. AWS Virtual Private Cloud
  17. Azure Cloud Onramp
  18. Azure Cloud
  19. Azure Cloud Powershell
  20. Bash Windows using Microsoft’s WSL (Windows Subsystem for Linux)

  21. Digital Ocean
  22. Cloud Foundry

  23. Packer automation to build Vagrant images
  24. Terraform multi-cloud provisioning automation

  25. Powershell Ecosystem
  26. Powershell on MacOS
  27. Powershell Desired System Configuration

  28. Jenkins Server Setup
  29. Jenkins Plug-ins
  30. Jenkins Freestyle jobs
  31. Jenkins2 Pipeline jobs using Groovy code in Jenkinsfile

  32. Dockerize apps
  33. Docker Setup
  34. Docker Build

  35. Maven on MacOSX

  36. Ansible

  37. MySQL Setup

  38. SonarQube static code scan

  39. API Management Microsoft
  40. API Management Amazon

  41. Scenarios for load