Immutable multi-service provisioning for Infrastructure as Code (IaC)


Overview

This tutorial is a step-by-step, hands-on, deep yet succinct introduction to using Hashicorp’s Terraform to build, change, and version clusters of immutable servers (behind load balancers) running in clouds, using idempotent declarative specifications (templates). “Idempotent” means that repeat runs make no changes if nothing in the specification has changed. Thus Terraform defines the “desired state configuration” (DSC).

Terraform is not a “multi-cloud tool” to ease migration among clouds to avoid vendor lock-in. One would need to rewrite all templates to move from, say, AWS to Azure. Terraform doesn’t abstract resources needed to do that.

Terraform is better characterized as a multi-service tool: one tool to manage GitHub/GitLab, Datadog, DigitalOcean, and AWS resources alike. That can’t really be done with CloudFormation alone.

CloudFormation does, however, have nested stacks.

Terraform can also provision on-premises servers running OpenStack, as well as AWS, Azure, Google Cloud, DigitalOcean, Fastly, and other cloud providers – “anything with an API”.

Automation

Terraform’s marketing page says it makes infrastructure provisioning: Repeatable. Versioned. Documented. Automated. Testable. Shareable.

Automating infrastructure deployment consists of these features:

  • Provisioning resources
  • Planning updates
  • Using source control
  • Reusing templates

The objective is to save money by automating the configuration of servers and other resources, which is quicker and more consistent than manually clicking through the GUI.

Infrastructure as Code Competition

The differences among Chef, Puppet, Ansible, SaltStack, AWS CloudFormation, and Terraform:

(Comparison chart of the above tools, colorized from Gruntwork’s blog; click to pop up the full-screen image.)

Additionally…

Feature | CloudFormation | Terraform
Multi-cloud provider support | AWS only | AWS, GCE, Azure (20+)
Source code | closed-source | open source
Open-source contributions? | No | Yes (GitHub issues)
State management | by AWS | within Terraform
GUI* | free Console | licensed*
Configuration format | JSON | HCL, JSON
Execution control* | No | Yes
Iterations | No | Yes
Manage already-created resources | No | Yes (hard)
Failure handling | Optional rollback | Fix & retry
Logical comparisons | No | Limited
Extensible modules | No | Yes

Terraform and Ansible can work in unison and complement each other. Terraform can bootstrap the underlying cloud infrastructure and then Ansible provisions the user space. To test a service on a dedicated server, skip using Terraform and run the Ansible playbook on that machine. Derek Morgan has a “Deploy to AWS with Ansible and Terraform” video class at LinuxAcademy which shows how to do just that, with code and diagram.

“Immutable” means once instantiated, it doesn’t change. In DevOps, this strategy means individual servers are treated like “cattle” (removed from the herd) and not as “pets” (courageously kept alive as long as possible).

“When I make a mistake in a complicated setup, I can get going again quickly and easily with less troubleshooting because I can just re-run the script.”

WARNING: Terraform does not support rollbacks in any meaningful way.

Terraform also provides parallel execution control, iterations, and (perhaps most of all) management of resources already created (desired state configuration) over several cloud providers (not just AWS).

A key differentiator is Terraform’s plan command, which provides more than just a “dry run” before configurations are applied for real. Under the covers, terraform plan can save an execution plan to a file, which apply then executes, guaranteeing that what apply performs is the same as what plan showed.

vs. AWS Cloud Formation

First of all, if you ever want to get AWS certified, you’re going to need to know CloudFormation. For a company, the choice often comes down to which vendor’s support is preferred, a real consideration given that Terraform has been available for only a few years.

Those who create AMIs now also provide CFN templates to customers.

Some have found CloudFormation’s references and interpolation difficult. Troposphere and Sceptre make CFN easier to write, adding basic loops and logic that CFN lacks. But in September 2018 CloudFormation got macros to do iteration and interpolation (find-and-replace). Caveat: macros do require some dependencies to be set up.

CFN also lacks the ability to upload large objects to S3.

AWS CloudFormation and Terraform can both be used at the same time. Terraform is often used to handle security groups, IAM resources, VPCs, subnets, and policy documents in general, while CFN is used for the actual infrastructure components, now that CloudFormation has released drift detection.

NOTE: “Combined with cfn-init and family, CloudFormation supports different forms of deployment patterns that is much more awkward to do in Terraform. ASGs with different replacement policies, automatic rollbacks based upon Cloudwatch alarms, and so forth are all well documented and work pretty straight forward in CloudFormation due to the state being managed purely internal to AWS. Terraform is not really an application level deployment tool and you wind up rolling your own. Working out an odd mix of null resources and shell commands to deploy an application while trying to roll back is not straightforward and seems like a lot of reinventing the wheel.”

Moreover, security-conscious organizations make it difficult to use third-party products due to the time-consuming infosec clearances needed.

Licensing open source for GUI

Code for Terraform is open-sourced at
https://github.com/hashicorp/terraform

Although Terraform is “open source”, the Terraform GUI requires a license.

Paid Pro and Premium licenses of Terraform add version control integration, MFA security, and other enterprise features.

Websites to know

Install Terraform

PROTIP: Terraform is written in the Go language, so (unlike Java) there is no separate VM to download.

  1. Get the version number of Terraform installed:

    terraform --version

    The response I got (at time of writing) is:

    Terraform v0.12.0

    WARNING: At time of writing, Terraform had not yet reached a “1.0.0” release, meaning it is still of beta maturity.

Install on MacOS

  1. So that you can easily switch among several installed versions of Terraform, install and use the tfenv Terraform version manager:

    brew install tfenv

    The response at time of writing:

    ==> Downloading https://github.com/tfutils/tfenv/archive/v0.6.0.tar.gz
    ==> Downloading from https://codeload.github.com/tfutils/tfenv/tar.gz/v0.6.0
    ######################################################################## 100.0%
    🍺  /usr/local/Cellar/tfenv/0.6.0: 19 files, 23.5KB, built in 7 seconds
    

    The source for this has changed over time: from https://github.com/Zordrak/tfenv (previously https://github.com/kamatama41/tfenv)

    When tfenv is used, do not install Terraform from the website or by using:

    brew install terraform

  2. Install the latest version of terraform using tfenv:

    tfenv install latest

    The response:

    [INFO] Installing Terraform v0.12.0
    [INFO] Downloading release tarball from https://releases.hashicorp.com/terraform/0.12.0/terraform_0.12.0_darwin_amd64.zip
    ######################################################################## 100.0%
    [INFO] Downloading SHA hash file from https://releases.hashicorp.com/terraform/0.12.0/terraform_0.12.0_SHA256SUMS
    tfenv: tfenv-install: [WARN] Unable to verify GPG signature unless logged into keybase and following hashicorp
    Archive:  tfenv_download.j57U3f/terraform_0.12.0_darwin_amd64.zip
      inflating: /usr/local/Cellar/tfenv/0.6.0/versions/0.12.0/terraform
    [INFO] Installation of terraform v0.12.0 successful
    [INFO] Switching to v0.12.0
    [INFO] Switching completed
    

    See Hashicorp’s version 12 announcement.

    The above creates the folder .terraform.d in your $HOME folder, containing files checkpoint_cache and checkpoint_signature.

  3. Proceed to Configuration.

Install on Windows

  1. Open a Run command window as Administrator.
  2. Install Chocolatey (if it is not already installed).
  3. Install Terraform using Chocolatey:

    choco install terraform -y

    The response at time of writing:

    Chocolatey v0.10.8
    Installing the following packages:
    terraform
    By installing you accept licenses for the packages.
    Progress: Downloading terraform 0.10.6... 100%
     
    terraform v0.10.6 [Approved]
    terraform package files install completed. Performing other installation steps.
    The package terraform wants to run 'chocolateyInstall.ps1'.
    Note: If you don't run this script, the installation will fail.
    Note: To confirm automatically next time, use '-y' or consider:
    choco feature enable -n allowGlobalConfirmation
    Do you want to run the script?([Y]es/[N]o/[P]rint): y
     
    Removing old terraform plugins
    Downloading terraform 64 bit
      from 'https://releases.hashicorp.com/terraform/0.10.6/terraform_0.10.6_windows_amd64.zip'
    Progress: 100% - Completed download of C:\Users\vagrant\AppData\Local\Temp\chocolatey\terraform\0.10.6\terraform_0.10.6_windows_amd64.zip (12.89 MB).
    Download of terraform_0.10.6_windows_amd64.zip (12.89 MB) completed.
    Hashes match.
    Extracting C:\Users\vagrant\AppData\Local\Temp\chocolatey\terraform\0.10.6\terraform_0.10.6_windows_amd64.zip to C:\ProgramData\chocolatey\lib\terraform\tools...
    C:\ProgramData\chocolatey\lib\terraform\tools
     ShimGen has successfully created a shim for terraform.exe
     The install of terraform was successful.
      Software installed to 'C:\ProgramData\chocolatey\lib\terraform\tools'
     
    Chocolatey installed 1/1 packages.
     See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).
     
  4. Proceed to Configuration.

Install on Ubuntu

  1. On a Console (after substituting the current version):

    sudo curl -O https://releases.hashicorp.com/terraform/0.12.0/terraform_0.12.0_linux_amd64.zip
    sudo apt-get install unzip
    sudo unzip terraform_0.12.0_linux_amd64.zip -d /usr/local/bin/
    
  2. Proceed to Configuration.

Install Docker

  1. To install Docker CE on Linux:

    sudo apt-get update
    sudo apt-get install \
     apt-transport-https \
     ca-certificates \
     curl \
     software-properties-common
     
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
     
    sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
     
    sudo apt-get update
    sudo apt-get install docker-ce
    
  2. Proceed to Configuration. (next below)

Ansible, Chef, Puppet


Configuration

Instructions below are for the Command Line.

If you prefer using Python, there is a Python module that provides a wrapper around the terraform command-line tool at https://github.com/beelit94/python-terraform

Command Alias list & help

  1. For a list of commands, use the abbreviated alternate to the terraform command:

    tf

    Alternately, use the long form:

    terraform

    Either way, the response is a menu (at time of writing):

    Usage: terraform [--version] [--help] <command> [args]
     
    The available commands for execution are listed below.
    The most common, useful commands are shown first, followed by
    less common or more advanced commands. If you're just getting
    started with Terraform, stick with the common commands. For the
    other commands, please read the help and docs before usage.
     
    Common commands:
     apply              Builds or changes infrastructure
     console            Interactive console for Terraform interpolations
     destroy            Destroy Terraform-managed infrastructure
     env                Workspace management
     fmt                Rewrites config files to canonical format
     get                Download and install modules for the configuration
     graph              Create a visual graph of Terraform resources
     import             Import existing infrastructure into Terraform
     init               Initialize a Terraform working directory
     output             Read an output from a state file
     plan               Generate and show an execution plan
     providers          Prints a tree of the providers used in the configuration
     refresh            Update local state file against real resources
     show               Inspect Terraform state or plan
     taint              Manually mark a resource for recreation
     untaint            Manually unmark a resource as tainted
     validate           Validates the Terraform files
     version            Prints the Terraform version
     workspace          Workspace management
     
    All other commands:
     0.12upgrade        Rewrites pre-0.12 module source code for v0.12
     debug              Debug output management (experimental)
     force-unlock       Manually unlock the terraform state
     push               Obsolete command for Terraform Enterprise legacy (v1)
     state              Advanced state management
    

    BLAH: Terraform doesn’t have an alias mechanism like Git’s for adding subcommands, so one has to remember which commands are Terragrunt and which are standard Terraform.

    NOTE: The terraform remote command configures remote state storage.

  2. Help on a specific command, for example:

    tf plan --help

    Terraform Console

  3. Open the Terraform Console (REPL) from a Terminal/command shell:

    tf console

    The response is the prompt:

    >
  4. Commands can evaluate interpolation functions. The element() function expects exactly two arguments (a list and an index), so this call returns an error:

    element(list("one","two","three"),0,2)

    The response:

    1:3: element: expected 2 arguments, got 3 in:

    With element(list("one","two","three"),0), the response would be "one", because counting begins from zero.
  5. Type exit or press (on a Mac) control+C to return to your Terminal window.

    The program also expects an additional top level in all .tfvars files:

Community modules

Modules help you cope with the many DevOps components and alternatives:

(Diagram of DevOps components and vendor alternatives.)

Blogs and tutorials on modules:

Terragrunt from Gruntwork

A popular replacement for some standard terraform commands is the set of terragrunt commands open-sourced by Gruntwork at https://github.com/gruntwork-io/terragrunt:


   terragrunt get
   terragrunt plan
   terragrunt apply
   terragrunt output
   terragrunt destroy
   

These wrapper commands provide a quick way to fill in gaps in Terraform - providing extra tools for working with multiple Terraform modules, managing remote state, and keeping DRY (Don’t Repeat Yourself), so that you only have to define it once, no matter how many environments you have.

Unlike Terraform, Terragrunt can configure remote state, locking, extra arguments, etc.

WARNING: There are some concerns about Terragrunt’s use of invalid data structures. See https://github.com/gruntwork-io/terragrunt/issues/466

Install on MacOS:

  1. To install Terragrunt on macOS:

    brew install terragrunt

To configure Terragrunt, define a terragrunt block (in older Terragrunt versions this lives in terraform.tfvars):

terragrunt = {
     # (put your Terragrunt configuration here)
   }
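
For example, a minimal sketch of filling in that block to keep remote state in S3 (the bucket name is an assumption; path_relative_to_include() is a Terragrunt helper that keys state by folder):

terragrunt = {
  remote_state {
    backend = "s3"
    config {
      bucket  = "mycompany-terragrunt-state"   # assumption: your own globally unique bucket
      key     = "${path_relative_to_include()}/terraform.tfstate"
      region  = "us-east-1"
      encrypt = true
    }
  }
}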

Provider credentials

Since the point of Terraform is to get you into clouds, Terraform looks for specific environment variables containing AWS credentials.

  1. Go to IAM in AWS to define a user with a password.
  2. Grant the AWS user permissions to use services.
  3. Mac users: add these lines containing credentials to ~/.bash_profile:

    export AWS_ACCESS_KEY_ID=(your access key id)
    export AWS_SECRET_ACCESS_KEY=(your secret access key)
    

    For Azure:

    AZ_PRINCIPAL=""
    AZ_USER=""
    AZ_PASSWORD=""
    AZ_USERNAME=""
    AZ_TENANT=""
    AZ_REGION=""
    

    For Google Cloud:

    GCP_PROJECT=""
    GCP_USER=""
    GCP_KEY=""
    GCP_REGION=""
    

PROTIP: Specifying passwords in environment variables is more secure than typing passwords in tf files*.

Sample Terraform scripts

Gruntwork’s sample

Gruntwork.io offers (for $4,950) access to their 250,000-line Reference Architecture of starter code to create a production-worthy “defense in depth” setup on AWS:

(Diagram of Gruntwork’s Reference Architecture on AWS.)

An additional $500 a month gets you access to their Reference Architecture Walkthrough video class, but previews of the class are free:

For those without the big bucks, Yevgeniy (Jim) Brikman (ybrikman.com, co-founder of DevOps as a Service Gruntwork.io) has generously shared:

The sample scripts referenced by this tutorial contain moustache variable mark-up so that you can generate a set for your organization.

HCL (Hashicorp Configuration Language)

Terraform defined HCL (Hashicorp Configuration Language) for both human and machine consumption. HCL is defined at https://github.com/hashicorp/hcl and described at https://www.terraform.io/docs/configuration/syntax.html.

The minimal HCL specifies the cloud provider, plus the AMI (which is specific to a region) and the instance type used to run it:

provider "aws" {
     access_key = "ACCESS_KEY_HERE"
     secret_key = "SECRET_KEY_HERE"
     region = "us-east-1"
   }
   resource "aws_instance" "example" {
      ami = "ami-2757f631"
      instance_type = "t2.micro"
   }

Each block defined between curly braces is called a “stanza”.

HCL is less verbose than JSON and more concise than YAML. *

More importantly, unlike JSON and YML, HCL allows annotations (comments). As in bash scripts: single line comments start with # (pound sign) or // (double forward slashes). Multi-line comments are wrapped between /* and */.

\ back-slashes specify continuation of long lines (as in Bash).

Values can be interpolated using syntax wrapped in ${}, called interpolation syntax, in the format ${type.name.attribute}. For example, ${aws_instance.base.id} is interpolated to something like i-28978a2. A literal $ is coded by doubling it up: $$.
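
A minimal sketch combining comment and interpolation syntax (the output block is illustrative, reusing the aws_instance.example resource from above):

# Single-line comment
// Another single-line comment
/* A multi-line
   comment */
output "instance_id" {
  value = "${aws_instance.example.id}"   # interpolated at apply time, e.g. i-28978a2
}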

More importantly, tf files are declarative, meaning that they define the desired end-state (outcomes). If 15 servers are declared, Terraform automatically adds or removes servers to end up with 15 servers rather than specifying procedures to add 5 servers.

Terraform can do that because Terraform knows how many servers it has setup already.

HCL does not contain conditional if/else logic, which is why modules (described below) are necessary.
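
A common workaround in Terraform 0.11-era configurations is to combine the count meta-parameter with a ternary expression; a sketch, assuming a hypothetical boolean variable create_eip:

variable "create_eip" {
  default = false
}
resource "aws_eip" "example" {
  count    = "${var.create_eip ? 1 : 0}"   # 0 means the resource is not created
  instance = "${aws_instance.example.id}"
}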

HCL2 is the newer version that combines HCL with the interpolation language HIL to produce a single configuration language that supports arbitrary expressions. It’s not backward compatible, with no direct migration path.

Terraform processes all .tf files in the directory invoked, in alphabetical order.

AWS EC2 Credentials

The above minimal HCL can be in a file named ec2.tf.

PROTIP: AWS credentials such as these, if included in tf files, might inadvertently be made visible to the public by getting checked into a public repository:

     access_key = "ACCESS_KEY_HERE"
     secret_key = "SECRET_KEY_HERE"
   

If you simply leave out AWS credentials, Terraform will automatically search for saved API credentials (for example, in ~/.aws/credentials) or IAM instance profile credentials.
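
A sketch of a provider block that omits credentials and relies on that search order instead (the region is illustrative):

provider "aws" {
  region = "us-east-1"
  # No access_key / secret_key here: Terraform falls back to the
  # AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables,
  # ~/.aws/credentials, or an IAM instance profile.
}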

An important distinction between CloudFormation and Terraform is that Terraform users must themselves track the state of each resource (CloudFormation keeps state within AWS).

Terraform Enterprise automatically stores the history of all state revisions.

See https://www.terraform.io/docs/state/index.html

Saving tfstate in S3 Backend

In a team environment, it helps to store state files off the local disk, in a “backend” location central to all.

  1. Using AWS IAM, define an AWS user with Permissions in a Role.
  2. Obtain and save credentials for the user in environment variables.

    VIDEO: Terraform Remote State on Amazon S3 describes use of a file named backend.tf, such as this AWS S3 specification, after substituting “YouOwn” with the (globally unique) S3 bucket name defined with the current AWS credentials:

    terraform {
      backend "s3" {
        bucket = "YouOwn-terraform"
        key    = "terraform.tfstate"
        region = "us-east-1"
      }
    }
    

    Apply to create tfstate

  3. While in the same folder as the “backend.tf” file (above), have Terraform read the configuration and establish an EC2 instance with the command:

    tf apply
  4. Confirm by typing “yes”.

    A new file terraform.tfstate is created to save the configuration state.

  5. Manually verify on the AWS Management Console webpage set to service S3.

    Destroy tfstate

  6. While in the same folder as the “backend.tf” file (above), have Terraform tear down what it created with the command:

    tf destroy
  7. Confirm by typing “yes”.

    The file terraform.tfstate should be deleted.

  8. Manually verify on the AWS Management Console webpage set to service S3.

Validate .tf files

  1. Navigate into the repo and view files in:

    ls single-web-server

    The contents:

    README.md    main.tf     outputs.tf   variables.tf
    

    This set can be within a sub-module folder.

    Credentials in tfvars

    Define cloud account credentials in a terraform.tfvars file containing:

    aws_access_key = "YourAWSAccessKey"
    aws_secret_key = "YourAWSSecretKey"
    private_key_path = "C:\\PathToYourPrivateKeys\\PrivateKey.pem"
    accountId = "YourAWSAccountID"
    

    It is not good security to risk such information in a repo that is potentially shared.

    tfvars environments

    PROTIP: Separate Terraform configurations by a folder for each environment.

    • base (template for making changes)
    • dev
    • loadtest (performance/stress testing)
    • stage
    • uat (User Acceptance Testing)
    • prod
    • demo (demonstration used by salespeople)
    • train (for training users)

  2. Navigate into the base folder.

    PROTIP: Terraform commands act only on the current directory and do not recurse into subdirectories.

  3. View the development.tfvars file:

    environment_tag = "dev"
    tenant_id = "223d"
    billing_code_tag = "DEV12345"
    dns_site_name = "dev-web"
    dns_zone_name = "mycorp.xyz"
    dns_resource_group = "DNS"
    instance_count = "2"
    subnet_count = "2"
    

    The production.tfvars file usually instead contains more instances, and thus more subnets, going through a load balancer for auto-scaling:

    environment_tag = "prod"
    tenant_id = "223d"
    billing_code_tag = "PROD12345"
    dns_site_name = "marketing"
    dns_zone_name = "mycorp.com"
    dns_resource_group = "DNS"
    instance_count = "6"
    subnet_count = "3"
    

    All these environments would share common main_config.tf and variables.tf files.

    Tags enable cost tracking by codes identifying a particular budget, project, department, etc. A sketch of applying such tags appears below.
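
    For example, a sketch of applying such tags to a resource using the variables above (the tag names are illustrative):

    resource "aws_instance" "tagged_example" {
      ami           = "ami-2757f631"
      instance_type = "t2.micro"
      tags {
        Environment = "${var.environment_tag}"
        BillingCode = "${var.billing_code_tag}"
      }
    }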

    variables.tf (vars.tf)

    This file contains a reference to environment variables:

       
    variable "aws_access_key" {}
    variable "aws_secret_key" {}
     
    variable "subnet_count" {
      default = 2
    }
    

    An example of the variables.tf file is explained in the video "Get started managing a simple application with Terraform" (February 21, 2018) by Alexandra White (at Joyent), which shows the deployment of the Happy Randomizer app:

    variable "image_name" {
      type        = "string"
      description = "The name of the image for the deployment."
      default     = "happy_randomizer"
    }
    variable "image_version" {
      type        = "string"
      description = "The version of the image for the deployment."
      default     = "1.0.0"
    }
    variable "image_type" {
      type        = "string"
      description = "The type of the image for the deployment."
      default     = "lx-dataset"
    }
    variable "package_name" {
      type        = "string"
      description = "The package to use when making a deployment."
      default     = "g4-highcpu-128M"
    }
    variable "service_name" {
      type        = "string"
      description = "The name of the service in CNS."
      default     = "happiness"
    }
    variable "service_networks" {
      type        = "list"
      description = "The name or ID of one or more networks the service will operate on."
      default     = ["Joyent-SDC-Public"]
    }
    

    In a cluster environment:

       
    variable "server_port" {
      description = "The port the server will use for HTTP requests"
      default = 8080
    }

    PROTIP: Each input should be defined as a variable.

    Defaults and lookup function

    PROTIP: Variables can be assigned a map of default values, with one selected by the lookup function:

    # AWS_ACCESS_KEY_ID
    # AWS_SECRET_ACCESS_KEY
    # export AWS_DEFAULT_REGION=xx-yyyy-0
     
    variable "server_port" {
      description = "The port the server will use for HTTP requests"
      default = 8080
    }
    variable "amis" {
      type = "map"
      default = {
        us-east-1 = "ami-1234"
        us-west-1 = "ami-5678"
      }
    }
    ami = "${lookup(var.amis, "us-east-1")}"
    

    PROTIP: With AWS EC2, region “us-east-1” must be used as the basis for creating others.

    NOTE: Amazon has an approval process for making AMIs available on the public Amazon Marketplace.

    main.tf

    An example of the main.tf file:

    terraform {
      required_version = ">= 0.8, < 0.9"
    }
    provider "aws" {
      alias = "NorthEast"
      region = "us-east-1"
      access_key = "${var.AWS_ACCESS_KEY}"
      secret_key = "${var.AWS_SECRET_KEY}"
    }
    resource "aws_instance" "web" {
      ami           = "ami-40d28157"
      instance_type = "t2.micro"
      subnet_id     = "subnet-c02a3628"
      vpc_security_group_ids = ["sg-a1fe66aa"]
      tags {
     Identity = "..."
      }
    }
    

    NOTE: Components of a Terraform configuration are: provider, resource, provisioner.

    “t2.micro” qualifies for the Amazon free tier available to first-year subscribers.

    PROTIP: Vertically aligning values helps to make information easier to find.

    The ami (amazon machine image) identifier is obtained from Amazon’s catalog of public images.
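
    Rather than hardcoding an AMI ID, it can also be looked up at plan time with an aws_ami data source; a minimal sketch (the owner ID and name filter are assumptions targeting Canonical's public Ubuntu images):

    data "aws_ami" "ubuntu" {
      most_recent = true
      owners      = ["099720109477"]   # assumption: Canonical's AWS account
      filter {
        name   = "name"
        values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*"]
      }
    }
    resource "aws_instance" "web_from_data" {
      ami           = "${data.aws_ami.ubuntu.id}"
      instance_type = "t2.micro"
    }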

    subnet_id places the instance in a VPC subnet, and vpc_security_group_ids is an array of security group IDs.

    The tags Identity value is used to scope permissions.

    See http://www.antonbabenko.com/2016/09/21/how-i-structure-terraform-configurations.html

    Another example is from the Terransible lab and course

    Terraform Cloud Providers

    Terraform translates HCL into API calls defined in (at last count, 109) cloud provider repositories from HashiCorp at:

    https://github.com/terraform-providers

    Terraform Cloud Providers

    “aws”, “google”, “google-beta”, “azurerm”, “azuread”, “heroku”, Kubernetes, “gitlab”, DigitalOcean, Heroku, GitHub, OpenStack, “cloudscale”, “cloudstack”, “opentelekomcloud”, “oci” (Oracle Cloud Infrastructure), “opc” (Oracle Public Cloud), “oraclepaas” (Oracle Platform Cloud), “flexibleengine”, “nsxt” (VMware NSX-T), “rancher”, “rancher2”, “vcd” (VMware vCloud Director), “openstack”, “azurestack”, “scaleway”, “UCloud”, “JDcloud”, Joyent Triton, Circonus, NaverCloud, TelefonicaOpenCloud, oneandone, Skytap, etc.

    In China: “alicloud”, “huaweicloud”, “tencentcloud”, etc.

    Monitoring and other infrastructure services vendors: “datadog”, “grafana”, “newrelic”, “pagerduty”, “bigip” (F5 BigIP), “RabbitMQ”, “acme”, “yandex”, “ciscoasa” (ASA), etc.

    CDN vendors: Dyn, “fastly”, “cloudflare”, “netlify”, “packet” (Terraform Packet), “consul” (Terraform Consul), “nutanix”, “ignition”, “dnsimple”, “fortis”, LogicMonitor, “profitbricks”, “statuscake”, etc.

    Database and repositories: “influxdb”, “mysql”, “postgresql”, “vault” (Terraform), “bitbucket”, “github”, “archive”, etc.

    Servers: “docker”, “dns”, UltraDNS, “helm” (Terraform), “http”, “vsphere” (VMware vSphere), etc.

    chef, “spotinst”, “linode”, “hedvig”, “selectel”, “brightbox”, “OVH”, “nomad”, “local”, Panos, NS1, “rundeck”, VMWare vRA7, random, external, “null”, Icinga2, Arukas, runscope, etc.

    The following have been archived: Atlas (Terraform), “clc” (CenturyLinkCloud), OpsGenie, (IBM) SoftLayer, PowerDNS, DNSMadeEasy, Librato, Mailgun, LogEntries, Gridscale, CIDR, etc.

    https://github.com/hashicorp/terraform/tree/master/builtin/providers

Terraform Providers

The steps below are based on https://www.terraform.io/intro/examples and implemented in the setup scripts at: https://github.com/wilsonmar/mac-setup which performs the following steps for you:

  1. Install a Git client if you haven’t already.
  2. Use an internet browser (Chrome) to see the sample assets at:

    https://github.com/terraform-providers/terraform-provider-aws.git

  3. If you are going to make changes, click the Fork button.
  4. Create or navigate to a container folder where new repositories are added. For example:

    ~/gits/wilsonmar/tf-sample

  5. Get the repo onto your laptop (substituting “wilsonmar” with your own account name):

    git clone https://github.com/terraform-providers/terraform-provider-aws.git --depth=1 tf-sample && cd tf-sample

    The above is one line, but may be word-wrapped on your screen.

    The response at time of writing:

    Cloning into 'tf-sample'...
    remote: Counting objects: 12, done.
    remote: Compressing objects: 100% (12/12), done.
    remote: Total 12 (delta 1), reused 9 (delta 0), pack-reused 0
    Unpacking objects: 100% (12/12), done.
    
  6. PROTIP: Make sure that the AWS region is what you want.

    https://www.terraform.io/docs/providers/aws/r/instance.html AWS provider

    VPC Security Group

  7. VPC Security group

    The example in Gruntwork’s intro-to-terraform also specifies the vpc security group:

    resource "aws_instance" "example" {
      # Ubuntu Server 14.04 LTS (HVM), SSD Volume Type in us-east-1
      ami = "ami-2d39803a"
      instance_type = "t2.micro"
      vpc_security_group_ids = ["${aws_security_group.instance.id}"]
      user_data = <<-EOF
               #!/bin/bash
               echo "Hello, World" > index.html
               nohup busybox httpd -f -p "${var.server_port}" &
               EOF
      tags {
     Name = "ubuntu.t2.hello.01"
      }
    }
    resource "aws_security_group" "instance" {
      name = "terraform-example-instance"
      # Inbound HTTP from anywhere:
      ingress {
     from_port = "${var.server_port}"
     to_port = "${var.server_port}"
     protocol = "tcp"
     cidr_blocks = ["0.0.0.0/0"]
      }
    }
    

    The “var.server_port” variable is defined in the variables file shown earlier.

    The Name tag value is what AWS uses to label the EC2 instance.

    Execution control

    Terraform automatically detects and enforces rule violations, such as use of rogue port numbers other than 80/443.

    outputs.tf

    Sample contents of an outputs.tf file:

      output "public_ip" {
      value = "${aws_instance.example.public_ip}"
    }
      output "url" {
      value = "http://${aws_instance.example.public_ip}:${var.port}"
    }
    

    Sample contents of an outputs.tf file for a cluster points to the Elastic Load Balancer:

    output "elb_dns_name" {
      value = "${aws_elb.example.dns_name}"
    }
    

    Examples

    Tests

    As with Java and other programming code, Terraform coding should be tested too.

    Gruntwork has an open-source library to setup and tear down conditions for verifying whether servers created by Terraform actually work.

    • https://github.com/gruntwork-io/terratest is a Go library that makes it easier to write automated tests for your infrastructure code.

    It’s written in Go and uses Packer, SSH, and other commands.

    The library can be used as the basis to automate experimentation and to collect results (the impact) of various configuration changes.

terraform validate

  1. Validate the folder (see https://www.terraform.io/docs/commands/validate.html)

    terraform validate single-web-server

    If no issues are identified, no message appears. (no news is good news)

  2. Add a pre-commit hook to your Git repository to run validate automatically.

    Main.tf

    PROTIP: There should be only one main.tf per folder.

    Plug-in Initialization

    Cloud providers are not included with the installer, so…

  3. In your gits folder:

    git clone https://github.com/terraform-providers/terraform-provider-aws.git --depth=1
    
  4. Initialize the Terraform working directory (like git init), which downloads provider plug-ins:

    terraform init

    Sample response:

    Initializing provider plugins...
           - Checking for available provider plugins on https://releases.hashicorp.com...
           - Downloading plugin for provider "aws" (1.17.0)...
     
    The following providers do not have any version constraints in configuration,
    so the latest version was installed.
     
    To prevent automatic upgrades to new major versions that may contain breaking
    changes, it is recommended to add version = "..." constraints to the
    corresponding provider blocks in configuration, with the constraint strings
    suggested below.
     
           * provider.aws: version = "~> 1.17"
     
    Terraform has been successfully initialized!
     
    You may now begin working with Terraform. Try running "terraform plan" to see
    any changes that are required for your infrastructure. All Terraform commands
    should now work.
     
    If you ever set or change modules or backend configuration for Terraform,
    rerun this command to reinitialize your working directory. If you forget, other
    commands will detect it and remind you to do so if necessary.
    

    See https://www.terraform.io/docs/commands/init.html

    This creates a hidden .terraform/plugins folder path containing a folder for your OS: darwin_amd64 for macOS.

    Provisioners

    Provisioner configurations are also plugins.

    Provisioner blocks specify initialization actions to run on each resource, such as commands to execute. For example, this installs an nginx web server and writes a minimal HTML page for it to display:

    provisioner "remote-exec" {
      inline = [
     "sudo yum install nginx -y",
     "sudo service nginx start",
     "echo "<html><head><title>NGINX server</title></head><body style=\"background-color"></body></html>"
      ]
    }
    

    CIDR Subnet function

    variable "network_info" {
      default = "10.0.0.0/8"   # type, default, description
    }
    cidr_block = "${cidrsubnet(var.network_info, 8, 1)}" # returns 10.1.0.0/16
    cidr_block = "${cidrsubnet(var.network_info, 8, 2)}" # returns 10.2.0.0/16


    This example terraform.tfvars file contains credentials for both the AWS EC2 and Azure ARM providers:

    bucket_name = "mycompany-sys1-v1"
    arm_subscription_id = "???"
    arm_principal = "???"
    arm_password = "???"
    tenant_id = "223d"
    aws_access_key = "insert access key here"
    aws_secret_key = "insert secret key here"
    private_key_path = "C:\\MyKeys1.pem"
    

    The private_key_path should be a full path, containing \\ so that the backslash is not interpreted as an escape character.

    bucket_name must be globally unique across all AWS customers.

    Terraforming AWS Configuration

    PROTIP: Install from https://github.com/dtan4/terraforming a Ruby script that enables a command such as:

    terraforming s3 --profile dev
    

    You can pass the profile name with the --profile option.

    Output

    outputs.tf file example:

    output "aws_elb_public_dns" {
      value = "${aws_elb.web.dns_name}"
    }
    output "public_ip" {
      value = "${aws_instance.example.public_ip}"
    }
    output "azure_rm_dns_cname" {
      value = "${azurerm_dns_cname_record.elb.id}"
    }
    
  5. PROTIP: If the AMI is no longer available, you will get an error message.

    Terraform Plan

  6. Have Terraform evaluate the plan based on vars in a different (parent) folder:

    
    terraform plan \
       -var-file='..\terraform.tfvars' \
       -var-file='.\Development\development.tfvars' \
       -state='.\Development\dev.state' \
       -out base-`date +'%s'`.plan
    

    The two dots in the command specify the folder above the current folder.

    The -out parameter specifies the output file name. Since the output of terraform plan is fed into the terraform apply command, a static file name is best. However, some prefer to avoid overwriting by automatically using a different date stamp in the file name.

    The “%s” yields a date stamp like 147772345, which is the number of seconds since the 1/1/1970 epoch.

    A sample response:

    "<computered>" means Terraform figures it out.
    

    Pluses and minuses flag additions and deletions. This is a key differentiator for Terraform.

    Terraform creates a dependency graph (specifically, a Directed Acyclic Graph) so that nodes are built in the order they are needed.

    Terraform apply

  7. Type:

    terraform apply "happy.plan"

    Alternately,

    terraform apply -state=".\develop\dev.state" -var="environment_name=development"

    Alternatively, a variable can be specified via an environment variable:

    TF_VAR_first_name="John" terraform apply
    

    Values assigned to Terraform variables define inputs, such as run-time DNS/IP addresses, into Terraform modules.

    What apply does:

    1. Generate model from logical definition (the Desired State).
    2. Load current model (preliminary source data).
    3. Refresh current state model by querying remote provider (final source state).
    4. Calculate difference from source state to target state (plan).
    5. Apply plan.

    NOTE: Built-in functions: https://terraform.io/docs/configuration/interpolation.html

    Sample response from terraform apply:

    dns_names = [
       [
          359f20b2-673d-6300-e918-fcea6a314a26.inst.d9a01feb-be7d-6a32-b58d-ec4a2bf4ba7d.us-east-3.triton.zone,
          happy-randomizer.inst.d9a01feb-be7d-6a32-b58d-ec4a2bf4ba7d.us-east-3.triton.zone
       ]
    ]
    primaryIp = [
       165.225.173.96
    ]
    

    State management

    Although AWS manages state for CloudFormation, to be cloud-agnostic Terraform users need to manage state themselves (using Terraform features).

    terraform apply generates .tfstate files (containing JSON) to persist the state of runs by mapping resource IDs to their data.

    PROTIP: CAUTION: tfstate files can contain secrets, so delete them before git add.

  8. The .gitignore file lists files generated during processing, which don’t need to persist in a repository:

    terraform.tfstate*
    *.tfstate
    *.tfstate.backup
    .terraform/
    *.iml
    *.plan
    vpc
    

    terraform.tfstate.backup holds the state from the most recent previous execution, before the current tfstate file contents.

    .terraform/ specifies that the folder is ignored when pushing to GitHub.

    Terraform apply creates a dev.state.lock.info file as a way to signal to other processes to stay away while changes to the environment are underway.

    Remote state

    NOTE: terraform.tfstate can be stored over the network in S3, the etcd distributed key-value store (used by Kubernetes), or a Hashicorp Atlas or Consul server. (Hashicorp Atlas is a licensed solution.)
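
    When the S3 backend is used, state locking can be added with a DynamoDB table; a sketch (the bucket and table names are assumptions):

    terraform {
      backend "s3" {
        bucket         = "YouOwn-terraform"
        key            = "terraform.tfstate"
        region         = "us-east-1"
        encrypt        = true
        dynamodb_table = "terraform-locks"   # assumption: a DynamoDB table whose primary key is LockID
      }
    }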

    State can be obtained using command:

    terraform remote pull

    Apps to install

    NOTE: Software can be specified for installation using the local-exec provisioner, which has Terraform execute commands on the host machine where Terraform runs. For example, on an Ubuntu machine:

    resource "null_resource" "local-software" {
      provisioner "local-exec" {
     command = <<EOH
    sudo apt-get update
    sudo apt-get install -y ansible
    EOH
      }
    }
    

    NOTE: apt-get is built into Ubuntu Linux distributions.

    PROTIP: Use this to bootstrap automation such as assigning permissions and running Ansible or PowerShell DSC, then use DSC scripts for more flexibility and easier debugging.

    Output variables

  9. Output Terraform variable:

    output "loadbalancer_dns_name" {
      value = "${aws_elb.loadbalancer.dns_name}"
    }
    

    Processing flags

    HCL can contain flags that affect processing. For example, within a resource specification, force_destroy = true forces the provider to delete the resource when done.
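
    For example, on the aws_s3_bucket resource, force_destroy allows terraform destroy to remove the bucket even if it still contains objects; a sketch (the bucket name is an assumption):

    resource "aws_s3_bucket" "logs" {
      bucket        = "mycompany-sys1-logs"   # assumption: a globally unique bucket name
      force_destroy = true                    # allow destroy even when objects remain in the bucket
    }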

    Verify websites

  10. Is the website accessible?

  11. In the provider’s console (EC2), verify that the expected resources were created.

    Destroy to clean up

  12. Destroy instances so they don’t rack up charges unproductively:

    terraform destroy

    PROTIP: Amazon charges for Windows instances by the hour while it charges for Linux by the minute, as other cloud providers do.

  13. Verify in the provider’s console (aws.amazon.com)

Plugins into Terraform

All Terraform providers are plugins - multi-process RPC (Remote Procedure Calls).

https://github.com/hashicorp/terraform/plugin

https://terraform.io/docs/plugins/index.html

Terraform expects plugins to follow a very specific naming convention of terraform-TYPE-NAME. For example, terraform-provider-aws tells Terraform that the plugin is a provider that can be referenced as “aws”.

PROTIP: Establish a standard for where plugins are located:

For *nix systems, ~/.terraformrc

For Windows, %APPDATA%/terraform.rc

https://www.terraform.io/docs/internals/internal-plugins.html

PROTIP: When writing your own terraform plugin, create a new Go project in GitHub, then locally use a directory structure:

$GOPATH/src/github.com/USERNAME/terraform-NAME

where USERNAME is your GitHub username and NAME is the name of the plugin you’re developing. This structure is what Go expects and simplifies things down the road.

TODO:

  • Grafana or Kibana monitoring
  • PagerDuty alerts
  • DataDog metrics

Modules

A Terraform module is a container for multiple resources that are used together.

Terraform modules provide “blueprints” to deploy.

The module’s source can be on a local disk:

module "service_foo" {
  source = "/modules/microservice"
  image_id = "ami-12345"
  num_instances = 3
}
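
Inside the /modules/microservice folder, the module itself is just ordinary Terraform files; a minimal sketch, assuming the module accepts the image_id and num_instances inputs shown above:

# /modules/microservice/variables.tf
variable "image_id" {}
variable "num_instances" {}

# /modules/microservice/main.tf
resource "aws_instance" "service" {
  count         = "${var.num_instances}"
  ami           = "${var.image_id}"
  instance_type = "t2.micro"
}

# /modules/microservice/outputs.tf
output "instance_ids" {
  value = ["${aws_instance.service.*.id}"]
}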
   

The source can be from a GitHub repo such as https://github.com/objectpartners/tf-modules

module "rancher" {
  source = "github.com/objectpartners/tf-modules//rancher/server-standalone-elb-db&ref=9b2e590"
}
   
  • Notice “https://” is not part of the source string.
  • Double slashes in the URL above separate the repo from the subdirectory.
  • PROTIP: The ref above is the first 7 hex digits of a commit SHA hash ID. Alternately, semantic version tag value (such as “v1.2.3”) can be specified. This is a key enabler for immutable strategy.

https://registry.terraform.io provides a marketplace of modules, including modules to create Hashicorp’s own Vault and Consul on AWS EC2, Azure, and GCP. Video of a demo by Yevgeniy Brikman:

(Image: Vault and Consul modules in the Terraform Registry.)

The above is created by making use of https://github.com/hashicorp/terraform-aws-vault stored as sub-folder hashicorp/vault/aws

terraform init hashicorp/vault/aws
   terraform apply

It’s got 33 resources. The sub-modules are:

  • private-tls-cert (for all providers)
  • vault-cluster (for all providers)
  • vault-lb-fr (for Google only)
  • vault-elb (for AWS only)
  • vault-security-group-rules (for AWS only)
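
A registry module such as the Vault module above can also be referenced directly in configuration with a version constraint; a sketch (the constraint is an assumption to adjust, and required module inputs are omitted):

module "vault" {
  source  = "hashicorp/vault/aws"
  version = ">= 0.0.1"   # assumption: pin or constrain to a release you have tested
}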

Rock Stars

Here are people who have taken time to create tutorials for us:

Derek Morgan in May 2018 released video courses on LinuxAcademy.com:

Dave Cohen in April 2018 made 5 hands-on videos using a DigitalOcean Personal Access Token (PAT).

Seth Vargo, Director of Evangelism at HashiCorp, gave a deep-dive hands-on introduction to Terraform at the O’Reilly conference on June 20-23, 2016. If you have a SafaribooksOnline subscription, see the videos: Part 1 [48:17], Part 2 [37:53]

Saurav Sharma created a YouTube playlist that references code at https://github.com/Cloud-Yeti/aws-labs as starters for a website of videos and a Udemy course.

Yevgeniy (Jim) Brikman (ybrikman.com), co-founder of DevOps as a Service Gruntwork.io

Some tasks, such as zero-downtime deployment, are hard to express in purely declarative terms.

Comprehensive Guide to Terraform includes:

James Turnbull

Jason Asse

Ned Bellavance (@ned1313 at nerdinthecloud.com) has several video classes on Pluralsight:

Nick Colyer

Kirill Shirinkin

James Nugent

  • Engineer at Hashicorp

Anton Babenko (github.com/antonbabenko linkedin)

dtan4

http://terraforming.dtan4.net

https://github.com/dtan4/terraforming is Ruby code.

Kyle Rockman (@Rocktavious, author of Jenkins Pipelines and github.com/rocktavious) presented at HashiConf17 (slides) a self-service app to use Terraform (powered by React+Redux using Jinja2 to a Gunicorn + Django back end running HA in AWS) that he hopes to open-source at github.com/underarmour

Others (YouTube videos):

AWS Cloud Formation

Puppet, Chef, Ansible, Salt; AWS API libraries Boto and Fog

AWS CloudFormation Sample Templates at https://github.com/awslabs/aws-cloudformation-templates

https://www.safaribooksonline.com/library/view/aws-cloudformation-master/9781789343694/ AWS CloudFormation Master Class by Stéphane Maarek from Packt May 2018

Some CloudFormation templates are compatible with OpenStack Heat templates.

References

SignalWarrant’s videos on PowerShell by David Keith Hall includes:

Terraform Basics mini-course on YouTube in 5-parts from “tutorialLinux”.

http://chevalpartners.com/devops-infrastructure-as-code-on-azure-platform-with-hashicorp-terraform-part-1/ quotes https://www.hashicorp.com/blog/azure-resource-manager-support-for-packer-and-terraform from 2016 about support for Azure Resource Manager

Sajith Venkit explains Terraform file examples in his “Building Docker Enterprise 2.1 Cluster Using Terraform” blog and repo for AliCloud and Azure.

AWS Cloudformation vs Terraform: Prepare for DevOps/ Cloud Engineer Interview

More on DevOps

This is one of a series on DevOps:

  1. DevOps_2.0
  2. ci-cd (Continuous Integration and Continuous Delivery)
  3. User Stories for DevOps

  4. Git and GitHub vs File Archival
  5. Git Commands and Statuses
  6. Git Commit, Tag, Push
  7. Git Utilities
  8. Data Security GitHub
  9. GitHub API
  10. TFS vs. GitHub

  11. Choices for DevOps Technologies
  12. Java DevOps Workflow
  13. AWS DevOps (CodeCommit, CodePipeline, CodeDeploy)
  14. AWS server deployment options

  15. Cloud regions
  16. AWS Virtual Private Cloud
  17. Azure Cloud Onramp
  18. Azure Cloud
  19. Azure Cloud Powershell
  20. Bash Windows using Microsoft’s WSL (Windows Subsystem for Linux)

  21. Digital Ocean
  22. Cloud Foundry

  23. Packer automation to build Vagrant images
  24. Terraform multi-cloud provisioning automation

  25. Powershell Ecosystem
  26. Powershell on MacOS
  27. Powershell Desired System Configuration

  28. Jenkins Server Setup
  29. Jenkins Plug-ins
  30. Jenkins Freestyle jobs
  31. Jenkins2 Pipeline jobs using Groovy code in Jenkinsfile

  32. Dockerize apps
  33. Docker Setup
  34. Docker Build

  35. Maven on MacOSX

  36. Ansible

  37. MySQL Setup

  38. SonarQube static code scan

  39. API Management Microsoft
  40. API Management Amazon

  41. Scenarios for load