How to keep secrets secret, but still shared and refreshed.


Overview

HashiCorp Vault’s basic job is to provide applications with client service tokens used to access databases and other services:

[Diagram: HashiCorp Vault authentication flow]

  1. The application authenticates with Vault (which coordinates with enterprise email, SAML, and LDAP systems)
  2. Vault verifies the identity of the application with a Trusted Platform (AWS, etc.)
  3. Verification is returned to Vault
  4. Vault returns a client token to the application. The token has an attached policy, which is mapped at authentication time; by default a policy denies all capabilities.
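
A minimal sketch of that flow from the CLI, assuming the AWS auth method is enabled and that a role and secret path like these exist:

    # Steps 1-3: Vault verifies the machine's AWS identity and returns a client token
    vault login -method=aws role=my-app-role
    # Step 4: the client token (and its attached policy) now gates what the app can read
    vault kv get secret/my-app/config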

See https://www.vaultproject.io/docs/what-is-vault

Vault also supports PKI (Public Key Infrastructure), used to handle certificates.

Vault improves your security posture by replacing static, long-lived secrets (which can be stolen) with dynamic secrets that carry a Time To Live (TTL) of a few hours. The system default maximum TTL is 32 days. The bulk of Vault’s work is renewing tokens and rotating secret keys, all so that the risk of unauthorized access is minimized.

Vault also serves as a third-party secrets provider across multiple clouds (AWS, Azure, GCP, etc.) while providing a detailed audit log.

As with other SaaS products, one can interact with Vault using its GUI, CLI, or API. This course assumes participants bring a Mac or Windows laptop and have prior experience with Linux CLI commands.

Vault’s secret-handling features are documented in several places. The unique contribution of this article is a deep yet concise approach: automation scripts which are then explained.

https://cloud.hashicorp.com/docs/vault summarizes the differences between “Self-managed” and HCP Vault cluster.

Here is a hands-on tutorial about how to install and use HashiCorp’s Vault (vaultproject.io) to securely store secret key/value pairs in a High Availability configuration.

Pricing: HashiCorp provides Vault free under an open-source license. Pay for an Enterprise license to get MFA, Replication, Disaster Recovery, Namespaces, Monitoring, FIPS 140-2, and quicker support. HashiCorp can provide a list of services partners.

https://github.com/hashicorp/vault-guides provides the technical content to support the Vault learn site.

Use cases

Once a Vault instance is available, we dive into:


What are secrets?

A secret is any “clear text” that you want to tightly control access to, such as API keys, passwords, SSH private certificates, and more.

Questions for secrets management:

  1. How do applications get secrets?
  2. How do humans acquire secrets?
  3. How are secrets updated? (rotated)
  4. How is a secret revoked?
  5. When were secrets used? (lookup in usage logs)
  6. What do we do in the event of compromise?

Requirements for secret keeping

Storing plain-text secrets hard-coded in program code stored in GitHub is like leaving Amazon packages on your doorstep for a long time.

Even if secrets are encrypted (using GPG), machines are powerful enough, and hackers patient enough, to eventually crack the encryption.

And you can’t simply remove a file from GitHub, because old versions remain in history and can be decrypted using old keys.

Features a secrets service should have:

  • Server installed in sealed mode (provides no access)

    Only the storage backend (which durably stores encrypted data) and the HTTP API are outside the barrier which is sealed and unsealed.

  • RBAC (Role-based Access Control) so each user has only the rights for his/her specific role. This has to be enabled in Kubernetes:

    --authorization-mode=RBAC
  • Limit access to designated containers

  • Encrypted transmission with Mutual authentication (MTLS)

  • Audit logging

  • Change the value of an existing secret (key rotation) without rebooting. This is a strong point of Vault.

  • Revocation

  • Multi-cloud support in HCP started in 2022 with AWS, and is moving to Azure.

Competitors

Alternatives to HashiCorp Vault include

HashiCorp’s Value Proposition

HashiCorp first released Vault in 2015.

VIDEO: Introduction to HashiCorp Vault Mar 23, 2018 by Armon Dadgar, HashiCorp’s CTO, is a whiteboard talk about avoiding “secret sprawl” living in clear text, using ephemeral (temporary) passwords and cryptographic offload to a central service.

As of this writing, a unique strong point with Vault is that it can change the value of an existing secret (key rotation) without rebooting.

HashiCorp Vault can be deployed to practically any environment, and does not require any special hardware (such as physical HSMs, Hardware Security Modules).

The value that HashiCorp Vault offers is centralizing secrets handling across organizations by automating replacement of long-lived secrets with dynamically generated secrets (asymmetric X.509 certificates) which have a controlled lease period. Vault forces a mandatory lease contract with clients. All secrets read from Vault have an associated lease to enable key usage auditing, perform key rolling, and ensure automatic revocation. Vault provides multiple revocation mechanisms to give operators a clear “break glass” procedure after a potential compromise.

Toward that, HashiCorp provides “Encryption as a Service” in the public cloud to enterprises.

Vault provides high-level policy management, secret leasing, audit logging, and automatic revocation.

Vault from HashiCorp provides a unified interface to secrets while providing tight access control plus recording a detailed audit log.

Alternatives to secret management

  • Chapter III. of the “twelve-factor app” recommends storing config in environment variables. The usual mechanism has been a clear-text file loaded into operating system variables, such as:

    docker run -e VARNAME=mysecret ...

    PROTIP: However, this is no longer acceptable because the values of variables (secrets) can end up in logs, all processes have access to the secrets (no RBAC), and this mechanism makes periodic key rotation cumbersome and manual.

  • Docker Secrets was NOT designed for unlicensed (free) standalone containers, but for Enterprise licensed (paid) Docker Swarm services in commands such as:

    docker service create --secret db_pass --name secret-test alpine bash

    db_pass is a secret created (from a .txt file or from stdin) and stored encrypted by commands such as:

    echo "mysecret" | docker secret create db_pass -
    docker secret ls

    Secrets are stored in the “Raft” distributed leader consensus log shared among Swarm managers, so the Swarm should be locked to keep that log encrypted at rest.

    Secrets can be added to a running service, but key rotation requires container restart.

    When the service is created (or updated), the secret is mounted into the container in the /run/secrets directory, which a custom program can read:

    def get_secret(secret_name):
        # Read a Docker secret mounted at /run/secrets/<name>; return None if it is absent
        try:
            with open('/run/secrets/{0}'.format(secret_name), 'r') as secret_file:
                return secret_file.read()
        except IOError:
            return None
 
    database_password = get_secret('db_pass')
     
  • Kubernetes secrets are stored in its etcd datastore. Encryption of secrets at rest in etcd is enabled by the API server flag:

    --experimental-encryption-provider-config

    https://github.com/Boostport/kubernetes-vault

  • Cloud-based KMS (Key Management Service) such as from Amazon

  • The Aqua utility provides secrets management to orchestrators so that:

    docker run -it --rm -e SECRET={dev-vault.secret/password} \
     --name ubuntu ubuntu /bin/bash
    docker inspect ubuntu -f "{{ .Config.Env }}"

    returns:

    ["SECRET={dev.vault-secret/password}","PATH=/usr/local/sbin:..."]

Secrets handling best practices

VIDEO:

  1. Don’t let authentication secrets live forever. Use single-use tokens with a short TTL (Time To Live).
  2. Distribute authentication secrets securely.
  3. Limit exposure if auth secrets disclosed. Use Least Privilege.
  4. Have a “break-glass” procedure if auth secrets are stolen.
  5. Detect unauthorized access to auth secrets. The app should alert if a secret is absent or invalid.

Vault Skill Certification

Since 2020, HashiCorp has offered (for just $70) an online certification exam for Vault: answer 57 questions in 60 minutes. You must wait 7 days between exam attempts, and you can attempt the exam only 4 times total in a one-year period. If you fail 3 exams, you must wait 365 days after your last exam before retaking it.

1 Compare authentication methods

  • Describe authentication methods
  • Choose an authentication method based on use case
  • Differentiate human vs. system auth methods

2 Create Vault policies

  • Illustrate the value of Vault policy
  • Describe Vault policy syntax: path
  • Describe Vault policy syntax: capabilities
  • Craft a Vault policy based on requirements

3 Assess Vault tokens

  • Describe Vault token
  • Differentiate between service and batch tokens. Choose one based on use-case
  • Describe root token uses and lifecycle
  • Define token accessors
  • Explain time-to-live
  • Explain orphaned tokens
  • Create tokens based on need

4 Manage Vault leases

  • Explain the purpose of a lease ID
  • Renew leases
  • Revoke leases

5 Compare and configure Vault secrets engines

  • Choose a secret method based on use case
  • Contrast dynamic secrets vs. static secrets and their use cases
  • Define transit engine
  • Define secrets engines

6 Utilize Vault CLI

  • Authenticate to Vault
  • Configure authentication methods
  • Configure Vault policies
  • Access Vault secrets
  • Enable Secret engines
  • Configure environment variables

7 Utilize Vault UI

  • Authenticate to Vault
  • Configure authentication methods
  • Configure Vault policies
  • Access Vault secrets
  • Enable Secret engines

8 Be aware of the Vault API

  • Authenticate to Vault via Curl
  • Access Vault secrets via Curl

9 Explain Vault architecture

  • Describe the encryption of data stored by Vault
  • Describe cluster strategy
  • Describe storage backends
  • Describe the Vault agent
  • Describe secrets caching
  • Be aware of identities and groups
  • Describe Shamir secret sharing and unsealing
  • Be aware of replication
  • Describe seal/unseal
  • Explain response wrapping
  • Explain the value of short-lived, dynamically generated secrets

10 Explain encryption as a service

Prep:

  • https://www.whizlabs.com/blog/hashicorp-vault-certification/
  • https://medium.com/bb-tutorials-and-thoughts/how-to-pass-hashicorp-vault-associate-certification-c882892d2f2b
  • STAR: https://medium.com/bb-tutorials-and-thoughts/200-practice-questions-for-hashicorp-vault-associate-certification-ebd7f7d27bc0
  • https://www.linkedin.com/pulse/how-pass-hashicorp-vault-associate-certification-yassine-n-/

Vault Operations Professional exam

HashiCorp’s Vault Operations Pro Certification is a $295, 4-hour exam that is hands-on lab-based as well as multiple-choice. The $295 exam fee includes a free retake after 7 days but within 3 months.

1 Create a working Vault server configuration given a scenario

  • 1a Enable and configure secret engines
  • 1b Practice production hardening
  • 1c Auto unseal Vault
  • 1d Implement integrated storage for open source and Enterprise Vault
  • 1e Enable and configure authentication methods
  • 1f Practice secure Vault initialization
  • 1g Regenerate a root token
  • 1h Rekey Vault and rotate encryption keys

2 Monitor a Vault environment

  • 2a Monitor and understand Vault telemetry
  • 2b Monitor and understand Vault audit logs
  • 2c Monitor and understand Vault operational logs

3 Employ the Vault security model

  • 3a Describe secure introduction of Vault clients
  • 3b Describe the security implications of running Vault in Kubernetes

4 Build fault-tolerant Vault environments

  • 4a Configure a highly available (HA) cluster
  • 4b [Vault Enterprise] Enable and configure disaster recovery (DR) replication
  • 4c [Vault Enterprise] Promote a secondary cluster

5 Understand the hardware security module (HSM) integration

  • 5a [Vault Enterprise] Describe the benefits of auto unsealing with HSM
  • 5b [Vault Enterprise] Describe the benefits and use cases of seal wrap (PKCS#11)

6 Scale Vault for performance

  • 6a Use batch tokens
  • 6b [Vault Enterprise] Describe the use cases of performance standby nodes
  • 6c [Vault Enterprise] Enable and configure performance replication
  • 6d [Vault Enterprise] Create a paths filter

7 Configure access control

  • 7a Interpret Vault identity entities and groups
  • 7b Write, deploy, and troubleshoot ACL policies
  • 7c [Vault Enterprise] Understand Sentinel policies
  • 7d [Vault Enterprise] Define control groups and describe their basic workflow
  • 7e [Vault Enterprise] Describe and interpret multi-tenancy with namespaces

8 Configure Vault Agent

  • 8a Securely configure auto-auth and token sink
  • 8b Configure templating

Vault Agent on laptops

A Vault Agent is a client daemon that provides:

  • Automatic authentication to Vault – manages the token renewal process for locally-retrieved dynamic secrets.

  • Templating – rendering of user supplied templates, using the token generated by the Auto-Auth step.

  • Secure delivery/storage of tokens

  • Caching of client-side responses containing newly created tokens and responses containing leased secrets generated off of these newly created tokens.

  • Lifecycle management (renewal and re-authentication) of tokens

Secrets Engines

Vault can work with many Secrets Engines selected in the GUI:

[Screenshot: Vault secrets engines in the GUI]

Each user selects an Auth Method protocol (if configured) to sign in:

[Screenshot: Vault sign-in Auth Method selection]

vault auth list

  1. On developer machines, the GitHub auth method (auth/github) is easiest to use.

    vault auth enable github

  2. Log into Vault using the vault CLI:

    vault login -method=github token="${VAULT_TOKEN}"
    

    The Vault “cubbyhole” is each user’s private “locker”. All secrets are namespaced under a token. When that token expires or is revoked, all the secrets in its cubbyhole are revoked with it. Even the root user cannot reach into a cubbyhole.

    However, secrets in the key/value secrets engine are accessible to other tokens if its policy allows it.

    To ensure that the value transmitted across the wire is not the actual secret (but a reference to the secret), Vault’s cubbyhole response wrapping is used: the initial token is stored in the cubbyhole secrets engine, and the wrapped secret can be unwrapped only with the single-use wrapping token. Even the user or system that created the initial token won’t see the original value.

    This mechanism provides malfeasance detection by ensuring that only a single party can ever unwrap the token and see what’s inside (within a limited time).
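
    As a sketch (the policy name comes from the team mapping below; the TTL is illustrative), a token can be created wrapped in a single-use wrapping token, which the recipient unwraps exactly once:

    vault token create -policy=applications -wrap-ttl=120s
    vault unwrap WRAPPING_TOKEN_FROM_ABOVE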

  3. Configure GitHub engineering team authentication to be granted the default and application policies:

    vault write auth/github/map/teams/engineering value=default,applications

AppRole

For servers, the AppRole method is recommended. It uses role_id and secret_id for login.

  • If the SecretID used for login is fetched from an AppRole, that is operating in Pull mode.
  • If a “custom” SecretID is set against an AppRole by the client, that’s Push mode.
  1. Log in with AppRole:

    curl --request POST --data @payload.json \
    http://127.0.0.1:8200/v1/auth/approle/login
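
    A sketch of producing payload.json, assuming an AppRole named “my-role” has already been created:

    # Fetch the role_id defined for the AppRole
    vault read auth/approle/role/my-role/role-id
    # Generate a secret_id against the AppRole (Pull mode)
    vault write -f auth/approle/role/my-role/secret-id
    # payload.json then contains both values:
    # { "role_id": "ROLE_ID_FROM_ABOVE", "secret_id": "SECRET_ID_FROM_ABOVE" }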
    

Tokens with attached policies

Within Vault, tokens map to information. The information mapped to each token is a set of one or more attached policies. Policies control what is allowed to be done with that information.

  • Service tokens support common features such as renewal, revocation, creating child tokens, and more. They are tracked and thus replicated, so are considered “heavyweight”.

  • Batch tokens can’t be renewed (and can’t have an explicit max TTL), require no storage on disk to track and replicate, and so are “lightweight” and scalable. Batch tokens can’t be root tokens and can’t be used to create child tokens (see the example below).
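
    A quick sketch (the policy name and TTL are illustrative) of creating a batch token:

    vault token create -type=batch -policy=applications -ttl=20m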

  1. The admin who manages secrets engines needs to be given a policy with capabilities on mounts (of secrets engines):

    path "sys/mounts/*" {
      capabilities = ["create", "read", "update", "delete", "list", "sudo"]
    }
    

    The sudo capability allows access to paths that are root-protected. Root tokens have the root policy attached to them. They are created at vault operator init, can do anything in Vault, and never expire (no renewal needed). As a result, it is purposefully hard to create root tokens. It is good security practice to have multiple eyes on a terminal whenever a root token is used, and to revoke the token immediately after tasks are completed.

    The sys/rotate path requires a root token or the sudo capability in the policy.

  2. Each policy defines a list of paths. Each path expresses the capabilities that are allowed.

    path "secret/data//*" {   
    capabilities = ["create", "update", "read", "delete"]
    required_parameters = ["bar", "baz"] 
    }  
    path "secret/metadata//*" {   
    capabilities = ["list"]
    }
    
  3. Permissions to configure Transit Secrets Engine:

    # Enable transit secrets engine
    path "sys/mounts/transit" {
      capabilities = [ "create", "read", "update", "delete", "list" ]
    }
    # To read enabled secrets engines
    path "sys/mounts" {
      capabilities = [ "read" ]
    }
    # Manage the transit secrets engine
    path "transit/*" {
      capabilities = [ "create", "read", "update", "delete", "list" ]
    }
    
  4. To configure the Transit Secrets Engine via CLI:

    vault secrets enable transit

    Alternately, to enable transit via API call using Curl:

    curl --header "X-Vault-Token: TOKEN" \
        --request POST \
        --data PARAMETERS \
        VAULT_ADDRESS/v1/sys/mounts/PATH
    

    Alternately, via UI at http://127.0.0.1:8200/ui/vault/auth?with=token see https://learn.hashicorp.com/tutorials/vault/eaas-transit

  5. To be able to list existing policies:

path "sys/policies/acl" {
  capabilities = ["list"]
}
   
  1. List all the registered policies:

    vault read sys/policy
  2. Encrypt plaintext in Base64 using Transit Engine in key_ring_name “orders” (example from Bhargav):

    vault write transit/encrypt/orders plaintext=$(base64 <<< "1234 4564 2221 5562")
    

    The ciphertext returned is prefixed with vault:v1: so that when you decrypt this ciphertext, you know to use Vault and v1 (version 1) of the encryption key.

  3. Rotate the encryption key version:

    vault write -f transit/keys/orders/rotate
    
  4. Decrypt the ciphertext:

    vault write transit/decrypt/orders ciphertext="vault:v2:XdEG7SKvaTFOwgi4bdrAy1ftxNw6QYR2Y82vWnOoMnvIkQLZeU419qWVCXuABCD"
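
    The decrypt response returns the plaintext base64-encoded; a sketch of recovering the original string (the -field flag selects just that value):

    vault write -field=plaintext transit/decrypt/orders \
      ciphertext="vault:v2:XdEG7SKvaTFOwgi4bdrAy1ftxNw6QYR2Y82vWnOoMnvIkQLZeU419qWVCXuABCD" \
      | base64 --decode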
    

Backend

From https://www.vaultproject.io/docs/internals/architecture

vault-layers

When the Vault server is started, it must be provided with a storage backend so that data is available across restarts. Similarly, the HTTP API service must be started by the Vault server on start so that clients can interact with it.

https://hashicorp.github.io/field-workshops-vault/slides/multi-cloud/vault-oss

https://hashicorp.github.io/field-workshops-vault/slides/multi-cloud/vault-oss/index.html is the slidedeck HashiCorp Sales Engineers use for a high-level presentation.

The Vault Database secrets engine generates dynamic, time-bound credentials for many different databases. Instruqt course “Vault Dynamic Database Credentials” (by Roger Berlind) walks you through the generation of dynamic credentials for a MySQL database that runs on the same server as the Vault server itself.
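
A minimal sketch of the pattern (the connection name, role name, and MySQL credentials here are placeholders):

    # Enable the database secrets engine
    vault secrets enable database
    # Tell Vault how to reach MySQL and which roles may use this connection
    vault write database/config/my-mysql \
        plugin_name=mysql-database-plugin \
        connection_url="{{username}}:{{password}}@tcp(127.0.0.1:3306)/" \
        allowed_roles="my-role" \
        username="root" password="ROOT_PASSWORD"
    # Define how MySQL users are created and how long their credentials live
    vault write database/roles/my-role \
        db_name=my-mysql \
        creation_statements="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}'; GRANT SELECT ON *.* TO '{{name}}'@'%';" \
        default_ttl=1h max_ttl=24h
    # Each read returns a fresh, time-bound username/password with a lease
    vault read database/creds/my-role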

Vault SaaS (HCP) first time with Dev public IP

BLOG: Start interacting with a Vault instance, even on a Chromebook, by getting a Vault cloud instance:

  1. At https://www.vaultproject.io click Try Cloud.
  2. Obtain an account using your email and password.
  3. Define an Organization to receive $50 of Trial credits before you have to provide your credit card.
  4. At the “Overview” page, click “Deploy Vault” for your organization.
  5. Create a Vault Cluster ID (named “vault-cluster” by default).
  6. Note the Network region (such as “Oregon us-west-2”)
  7. Select “Allow public connections from outside your selected network” since you’re in dev. mode this time.
  8. Pricing:
    • To start with, select Vault tier: “Development” to be associated with an “Extra Small” Cluster size of 2 vCPU/ 1GiB RAM for $0.30/hr = $7.20/day = $216/month (of 30 days) = $2,592/year
    • Starter of $0.50/hr = $12/day = $360/month = $4,320/year
    • Standard of $1.578/hr = $37.87/day = $1,136.16/month = $13,633.92/year
    • Plus of $1.844/hr = $44.26/day = $1327.68/month = $15,932.16/year

  9. Confirm Network settings (such as CIDR block 172.25.16.0/20, a non-routable address space).
  10. Click “Create cluster”, then watch https://portal.cloud.hashicorp.com show “Cluster initializing” turn to ready (in 5-10 minutes).
  11. PROTIP: Delete the cluster while you study the configuration process.

    If you selected “Development” at $0.30/hour, that $50 trial gets you about 7 days of run time.

    Configure for Authentication

  12. Configure at least one audit device to write log (before completing the request):

    vault audit enable file file_path=/var/log/vault_audit.log

    AWS

    A prerequisite is AWS credentials to an AWS account.

    Tutorial: The “Deploy HCP Vault with Terraform” example scenario automatically deploys an AWS VPC and peers it with your HashiCorp Virtual Network (HVN).

  13. To learn Vault configuration, view VIDEO: A Quickstart Guide

    The connection between the AWS VPC and the HCP HVN uses VPC Peering.

  14. Read HCP Vault documentation at:

    https://cloud.hashicorp.com/docs/vault

  15. Click “Manage” to Import to Terraform:

    terraform import hcp_vault_cluster.<RESOURCE_NAME> vault-cluster
  16. Click “Access Vault” for “Command-line (CLI)”.
  17. Click “Use public URL” and click the copy icon to save to your Clipboard, for example:

    export VAULT_ADDR="https://vault-cluster.vault.a17838e5-60d2-4e49-a43b-cef519b694a5.aws.hashicorp.cloud:8200"; 
    export VAULT_NAMESPACE="admin"
    
  18. Paste the value ???

  19. Authenticate to Vault at https://www.vaultproject.io/docs/concepts/auth#authenticating

    export VAULT_TOKEN=[ENTER_TOKEN_HERE]
  20. https://learn.hashicorp.com/tutorials/vault/getting-started-apis

Provision a Dev Vault Cluster on AWS with Terraform


Replication

For organizations with infrastructure that spans multiple datacenters/regions, Vault provides identity management, secrets storage, and policy management that remains highly available and scalable as the number of clients and their functional needs increase. At the same time, a common set of policies needs to be enforced globally, with a consistent set of secrets and keys exposed to applications so they can interoperate.

Vault has two approaches to replicate its secrets in secondaries, transparently to the client:

  • In performance replication, secondaries keep track of their own tokens and leases but share the underlying configuration, policies, and supporting secrets (K/V values, encryption keys for Transit, etc). When a user action modifies an underlying shared state, the secondary forwards the request to the primary to be handled. If the primary fails, the secondary cannot take over.

  • In disaster recovery (DR) replication, secondaries do not handle client requests, but stand by to continue operations: when the original primary fails, a DR secondary is elected (promoted) and applications connect to it instead. (See the Enterprise sketch below.)
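
A hedged sketch of the Vault Enterprise API paths involved (the secondary id and token value are illustrative):

    # On the primary: enable performance replication and issue an activation token for a secondary
    vault write -f sys/replication/performance/primary/enable
    vault write sys/replication/performance/primary/secondary-token id="secondary-1"
    # On the secondary: activate replication using the wrapping token returned above
    vault write sys/replication/performance/secondary/enable token="ACTIVATION_TOKEN_FROM_PRIMARY"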


Install Consul server

To provision a Quick Start Vault & Consul Cluster on AWS with Terraform

Consul coordinates several instances of Vault server software.

Using HashiCorp’s Consul as a backend to Vault provides durable storage of encrypted data at rest necessary for fault tolerance, availability, and scalability.

[Diagram: Vault with Consul storage backend flow]

  1. Consul Cluster server configuration sample file /etc/consul.d/server/consul-node.json, replacing the ALL-CAPS placeholders with your own values (the autopilot stanza is an Enterprise feature):

    {
      "server": true,
      "node_name": "NODENAME",
      "datacenter": "DATACENTERNAME",
      "data_dir": "/var/consul/data",
      "bind_addr": "0.0.0.0",
      "client_addr": "0.0.0.0",
      "domain": "HOSTNAME.com",
      "advertise_addr": "IPADDR",
      "bootstrap_expect": 5,
      "retry_join": ["provider=aws tag_key=consul tag_value=true"],
      "ui": true,
      "log_level": "DEBUG",
      "enable_syslog": true,
      "primary_datacenter": "DATACENTERNAME",
      "acl": {
        "enabled": true,
        "default_policy": "allow",
        "down_policy": "extend-cache"
      },
      "node_meta": {
        "zone": "AVAILABILITYZONE"
      },
      "autopilot": {
        "redundancy_zone_tag": "zone"
      }
    }
    
  2. To see log entries:

    sudo tail -F /var/log/messages
  3. Take a snapshot used to restore:

    consul snapshot save yymmdd-svr1.snap

    Response:

    Saved and verified snapshot to index 123
  4. Inspect the snapshot:

    consul snapshot inspect yymmdd-svr1.snap

    Response is an ID, Size, Index, Term, Version.

Nomad

HashiCorp Nomad passes secrets as files.

It polls for changed values. Tasks get tokens so they can retrieve values.

Using Envconsul with GitHub

Envconsul is launched as a subprocess (daemon) which retrieves secrets using REST API calls of KV (Key Value) pairs in Vault/Consul based on “configuration files” specified in the HashiCorp Configuration Language.

It works on many major operating systems with no runtime requirements. On MacOS:

brew install envconsul
envconsul -v
v0.9.2 ()

For the full list of command-line options:

envconsul -h

Envconsul is also available via a Docker container for scheduled environments.

Secrets are requested based on a specification of secrets to be fetched from HashiCorp Vault based on a configuration file. A sample of its contents is this, which requests the api-key field of the secret at secret/production/third-party:

    production/third-party#api-key
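
A hedged usage sketch (VAULT_ADDR and VAULT_TOKEN are assumed to be set; the secret path and the child command are illustrative):

    # Fetch the secret's fields into environment variables, then run the child command with them
    envconsul -secret secret/production/third-party env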

Credentials authorizing retrieval requests are defined …


Within App Programming Code

The “12 Factor App” advocates for app programming code to obtain secret data from environment variables (rather than hard-coding them in code stored within GitHub). So populating environment variables with clear-text secrets would occur outside the app, in the run-time environment. Several utilities have been created for that:

  • The Daytona CLI client from Cruise (the autonomous-car company) at https://github.com/cruise-automation/daytona is written in Golang to be a “lighter, alternative, implementation” of a Vault client CLI for services and containers. It automates authentication, fetching of secrets, and token renewal to Kubernetes, AWS, and GCP. Daytona is performant because it pre-fetches secrets upon launch and stores them either in environment variables, as JSON in a specified file, or as singular or plural secrets in a specified file.

Instruqt Basic Course

HashiCorp provides hands-on courses at https://play.instruqt.com/login.

After being given 30-day access to the Vault Basics course, its lessons are for running in dev mode:

NOTE: Labs timeout every 2 hours.

  • The Vault CLI - Run the Vault Command Line Interface (CLI).
  • Your First Secret - Run a Vault dev server and write your first secret.

    https://www.vaultproject.io/api-docs/index

  • The Vault API - Use the Vault HTTP API

    curl http://localhost:8200/v1/sys/health | jq
     

    Response includes “cluster” only if Vault was setup as a cluster:

    % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                   Dload  Upload   Total   Spent    Left  Speed
    0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0{
    "initialized": true,
    "sealed": false,
    "standby": false,
    "performance_standby": false,
    "replication_performance_mode": "disabled",
    "replication_dr_mode": "disabled",
    "server_time_utc": 1591126861,
    "version": "1.2.3",
    "cluster_name": "vault-cluster-2a4c0e97",
    "cluster_id": "0b74ccb6-8cee-83b8-faa6-dc7355481e4b"
    }
    100   294  100   294    0     0  49000      0 --:--:-- --:--:-- --:--:-- 58800
     
    curl --header "X-Vault-Token: root" http://localhost:8200/v1/secret/data/my-first-secret | jq

    Response:

      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                   Dload  Upload   Total   Spent    Left  Speed
    100   289  100   289    0     0  32111      0 --:--:-- --:--:-- --:--:-- 32111
    {
    "request_id": "1fbb67f5-04a2-5db1-06b4-8210a6959565",
    "lease_id": "",
    "renewable": false,
    "lease_duration": 0,
    "data": {
      "data": {
        "age": "100"
      },
      "metadata": {
        "created_time": "2020-06-02T19:36:39.21280375Z",
        "deletion_time": "",
        "destroyed": false,
        "version": 1
      }
    },
    "wrap_info": null,
    "warnings": null,
    "auth": null
    }
     
  • Run a Production Server - Configure, run, initialize, and unseal a production mode Vault server.

Vault Initialization

  • https://www.vaultproject.io/docs/concepts/seal/

  1. Production servers are configured by a vault-config.hcl file (in folder /vault/config) read by the vault server command, and then initialized:

    vault server -config=/vault/config/vault-config.hcl
    vault operator init -key-shares=1 -key-threshold=1
    

    REMEMBER: Vault command parameters have a single dash, not a double-dash.

    By default, vault operator init generates (using the Shamir algorithm) 5 unseal -key-shares, of which a -key-threshold quorum of 3, held by different employees, is needed to unseal the server and reconstruct the Master encryption key, which is used to protect (encrypt) the Data Encryption Keys stored with the data they encrypt.

    Each shard can be encrypted with a different PGP key for each person with a shard.

  2. Repeat vault operator unseal to input each shard key.

    The Root Token is used to initialize Vault, then thrown away.

    The server stays unsealed until it is restarted or Vault’s backend storage layer encounters an unrecoverable error. (See the example below.)
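
    For example, with the default threshold of 3, three different key holders each run:

    vault operator unseal    # paste unseal key share 1 of 3
    vault operator unseal    # paste unseal key share 2 of 3
    vault operator unseal    # paste unseal key share 3 of 3 -- Sealed becomes false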

  3. Initialize the “vault” cluster the first time:

    vault operator init
  4. Alternately, use Cloud Auto Unseal by retrieving a Master Key by supplying a Key ID stored in a HSM within a cloud (AWS KMS, Google Cloud KMS, Azure Key Vault, etc.). For example, in the Vault config file:

    seal "awskms" {
      region = "us-east-1"
      # access_key = "AKIA..."  # use IAM Service Role instead
      # secret_key = "..."
      kms_key_id = "abcd123-abcd123-abcd123-abcd123-abcd123"
      endpoint = "vpc endpoint"
    }
    

    PROTIP: Store Vault configuration files at /etc/vault.d/vault.hcl

    NOTE: The Master Key remains resident in a Vault node’s memory and is not stored.

    Transit Engine

    Vault’s Transit Secrets engine handles cryptographic functions on data-in-transit and doesn’t store data sent to it, so it is called an “encryption as a service”.

  5. Alternately, use Vault Transit Unseal by referencing a separate (leveraged) HA central Core Vault Cluster running the Vault Transit engine configured with this example:

    seal "transit" {
      address = "https://vault:8200"
      token = "s.QsGo2dfFGqIIOCLFWFE"
      disable_renewal = "false"
      // Key configuration:
      key_name = "transit_key_name"
      mount_path = "transit/"
      namespace = "nsl/"
      // TLS Configuration:
      tls_ca_cert = "/etc/vault/ca_cert.pem"
      tls_client_cert = "/etc/vault/client_cert.pem"
      tls_client_key = "/etc/vault/client_key.pem"
      tls_server_name = "vault"
      tls_skip_verify = "false"
    }
    

    Instruqt course “Vault Encryption as a Service” shows how Vault’s Transit secrets engine provides encryption as a service.

  6. Enable a secret transit engine:

    vault secrets enable transit
  7. Other configuration stanzas:

    listener "tcp" {
      address = "0.0.0.0:8200"  # all machines
      cluster_address = "0.0.0.8:8201"
      tls_disable = "true"  # only in dev (not in PROD)
      # tls_cert_key & tls_cert_file
    }
    // backend:
    storage "consul" {
      address = "127.0.0.1:8500"  # locally
      path = "vault/"
    }
    // Where to publish metrics to upstream systems:
    telemetry {
      ...
    }
    log_level = "info"
    api_addr = "https://IPADDRESS:8200"
    ui = true
    cluster_name = "my_cluster"
    
  8. After Vault is running, use the UI to configure:

    • Secrets Engine
    • Authentication Methods
    • Audit Devices
    • Policies
    • Entities & Groups

    VAULT_TOKEN

    Save the unseal key from the response, and set the “VAULT_TOKEN” environment variable to the initial root token that the “init” command returned:

    export VAULT_TOKEN="$root_token"

    You next need to unseal your Vault server, providing the unseal key that the “init” command returned: vault operator unseal

    This will return the status of the server, which should show that “Initialized” is “true” and that “Sealed” is “false”.

    To check the status of your Vault server at any time, you can run the “vault status” command. If it shows that “Sealed” is “true”, re-run the “vault operator unseal” command.

    Finally, log into the Vault UI with your root token. If you have problems, double-check that you ran all of the above commands.
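
    Putting those steps together (a sketch; the token and unseal key values come from your own “init” output):

    export VAULT_TOKEN="$root_token"     # initial root token from "vault operator init"
    vault operator unseal                # paste an unseal key when prompted
    vault status                         # expect Initialized=true, Sealed=false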

  • Enable and use an instance of HashiCorp’s KV v2 Secrets engine (the default when running in dev mode):

    vault secrets enable -version=2 kv

    Alternately:

    vault secrets enable kv-v2
  • Use the Userpass Auth Method - Enable and use the userpass authentication method.

  • Use Vault Policies - Use Vault policies to grant different users access to different secrets. “Least privilege” capabilities correspond to HTTP verbs:

    path "secret/dev/team-1/*" {
    capabilities = ["create", "read", "list", "update", "delete"]
    }
     

Using Vaultenv with GitHub

https://github.com/channable/vaultenv populates values in OS environment variables referenced within programming code by making a syscall from the exec family. Vaultenv replaces its own process with your app. After your service has started, vaultenv does not run anymore.

Vaultenv retrieves secrets using REST API calls of KV (Key Value) pairs based on “behavior configuration files” specified in the following files traveling with the programming code:

  • $CWD/.env (as popularized by Ruby gems)
  • /etc/vaultenv.conf
  • $HOME/.config/vaultenv/vaultenv.conf

CAUTION: When secrets in Vault change, Vaultenv does not automatically restart services. By comparison, envconsul from HashiCorp (also described here) daemonizes and spawns child processes to manage the lifecycle of the process it provides secrets to.

Within its configuration file, secrets are requested based on a specification of secrets to be fetched from HashiCorp Vault, such as this requesting the api-key field of the secret at secret/production/third-party.

    production/third-party#api-key

The utility is written in the Haskell language under a 3-clause BSD license, and releases run on Linux (it has not been tested on any other platform, such as macOS).

Install Server binaries

Precompiled Vault binaries are available at https://releases.hashicorp.com/vault

PROTIP: Enterprise and free versions have different binaries. Paid Enterprise editions include Read Replicas and Replication for DR, plus MFA, Sentinel, and HSM Auto-Unseal with FIPS 140-2 & Seal Wrap. A system service file is needed for prod instances.

PROTIP: Vault has a single program file for server and client.

There are several ways to obtain a running instance of HashiCorp Vault, listed from easiest to most difficult:

CAUTION: If you are in a large enterprise, confer with your security team before installing. They often have a repository such as Artifactory or Nexus where installers are available after being vetted and perhaps patched for security vulnerabilities.

See https://github.com/hashicorp/vault-guides and https://devopstales.github.io/linux/hashicorp-vault/

A. Vault cloud service

  • Azure Vault (https://jpvelasco.com/test-driving-the-azure-key-vault-client-samples/)

B. Use Homebrew to install Vault locally on MacOS.

C. Pull an image from Docker Hub

D. Download from HashiCorp to install locally.

E. Use a Dockerfile to build your own Docker image, if you’re not using Vault frequently and want to get the latest when you do.


A. Vault Cloud service

Vault is an open source tool that can be deployed to any environment. It is well suited for cloud environments where HSMs are either not available or are cost prohibitive.

  1. Create, within your internal cloud, Google Cloud, Amazon EC2, Microsoft Azure, etc., a VM instance of an Ubuntu server. 4 GB RAM and a 10 GB drive is the minimum.

    A sample command to create a Google Cloud instance:

    THIS_PROJECT_NAME="woohoo1"
    THIS_INSTANCE_NAME="wildone"
    GCP_ACCT="mememe"
    gcloud beta compute --project "${THIS_PROJECT_NAME}" instances create "${THIS_INSTANCE_NAME}" --zone "us-central1-f" --machine-type "n1-standard-1" --subnet "default" --maintenance-policy "MIGRATE" --service-account "${GCP_ACCT}@developer.gserviceaccount.com" --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring.write","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" --min-cpu-platform "Automatic" --tags "http","https","web","http-server","https-server" --image "ubuntu-1604-xenial-v20171026a" --image-project "ubuntu-os-cloud" --boot-disk-size "10" --boot-disk-type "pd-standard" --boot-disk-device-name "${THIS_INSTANCE_NAME}"
    

Docker Compose of Vault server

brew install docker
brew install docker-compose  # now a plug-in to docker
cd; mkdir -p projects/vault; cd ~/projects/vault
git clone https://github.com/ryanhartje/containers.git
cd containers/consul-vault/
docker-compose up -d

B. Homebrew on MacOS

If you’re going to be using Vault a lot on your Mac, install using Homebrew:

  1. In a Terminal…
  2. See that there are several packages with the name “vault”:

    brew search vault

    Note there are several:

    ==> Formulae
    argocd-vault-plugin        ssh-vault                  vaulted
    aws-vault                  vault ✔                    vultr
    hashicorp/tap/vault ✔      vault-cli
     
    ==> Casks
    aws-vault                  btcpayserver-vault         gmvault
     
    If you meant "vault" specifically:
    It was migrated from homebrew/cask to homebrew/core.
    
  3. Verify the source:

    brew info vault

    At time of this writing:

    vault: stable 1.9.4 (bottled), HEAD
    Secures, stores, and tightly controls access to secrets
    https://vaultproject.io/
    /usr/local/Cellar/vault/1.9.2 (8 files, 178.7MB) *
      Poured from bottle on 2021-12-23 at 23:17:56
    From: https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/vault.rb
    License: MPL-2.0
    ==> Dependencies
    Build: go ✘, gox ✘, node@14 ✘, yarn ✔
    ==> Options
    --HEAD
     Install HEAD version
    ==> Caveats
    To restart vault after an upgrade:
      brew services restart vault
    Or, if you don't want/need a background service you can just run:
      /usr/local/opt/vault/bin/vault server -dev
    ==> Analytics
    install: 11,140 (30 days), 32,117 (90 days), 122,131 (365 days)
    install-on-request: 10,818 (30 days), 31,203 (90 days), 118,906 (365 days)
    build-error: 2 (30 days)
    

    Compare growth from a previous version:

    vault: stable 1.9.2 (bottled), HEAD
    ...
    install: 9,528 (30 days), 29,343 (90 days), 116,531 (365 days)
    install-on-request: 9,273 (30 days), 28,580 (90 days), 113,456 (365 days)
    build-error: 6 (30 days)
    

    Notice from above that “Go” is a pre-requisite. So…

  4. Install pre-requisite Go language:
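
    Assuming Homebrew, one way is:

    brew install go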

  5. Install Vault client on MacOS using Homebrew:

    brew install vault
    vault 1.9.2 is already installed but outdated (so it will be upgraded).
    ==> Downloading https://ghcr.io/v2/homebrew/core/vault/manifests/1.9.4
    ######################################################################## 100.0%
    ==> Downloading https://ghcr.io/v2/homebrew/core/vault/blobs/sha256:0e71de8e8d51
    ==> Downloading from https://pkg-containers.githubusercontent.com/ghcr1/blobs/sh
    ######################################################################## 100.0%
    ==> Upgrading vault
      1.9.2 -> 1.9.4 
     
    ==> Pouring vault--1.9.4.monterey.bottle.tar.gz
    ==> Caveats
    To restart vault after an upgrade:
      brew services restart vault
    Or, if you don't want/need a background service you can just run:
      /usr/local/opt/vault/bin/vault server -dev
    ==> Summary
    🍺  /usr/local/Cellar/vault/1.9.4: 8 files, 179.4MB
    ==> Running `brew cleanup vault`...
    Disable this behaviour by setting HOMEBREW_NO_INSTALL_CLEANUP.
    Hide these hints with HOMEBREW_NO_ENV_HINTS (see `man brew`).
    Removing: /usr/local/Cellar/vault/1.9.2... (8 files, 178.7MB)
    

    Compare historically:

    🍺  /usr/local/Cellar/vault/1.9.2: 8 files, 178.7MB
    ...
    🍺  /usr/local/Cellar/vault/1.3.2: 6 files, 124.2MB
      Built from source on 2019-11-18 at 05:05:44
    
  6. The great thing with Homebrew is you can upgrade and uninstall easily.

    brew upgrade vault
    
  7. Verify version

    vault --version
    

    At time of writing, the response:

    Vault v1.9.4 ('fcbe948b2542a13ee8036ad07dd8ebf8554f56cb+CHANGES')
    
  8. Where is the release semver among the History of releases on GitHub?

    https://github.com/hashicorp/vault/releases

  9. Verify location:

    which vault

    Result:

    /usr/local/bin/vault
  10. Persist the version of Vault for use in commands by editing ~/.bash_profile to add these lines:

    export VAULT_VERSION="1.9.4"
    complete -C /usr/local/bin/vault vault
    
  11. Install auto completions:

    vault -autocomplete-install
    

    No message is returned. Otherwise, if already installed once:

    Error executing CLI: 2 errors occurred:
     * already installed in /Users/wilsonmar/.bash_profile
     * already installed in /Users/wilsonmar/.zshrc
    
  12. See menu of commands by running the command without parameters:

    vault
    

    Response:

    Usage: vault <command> [args]
     
    Common commands:
     read        Read data and retrieves secrets
     write       Write data, configuration, and secrets
     delete      Delete secrets and configuration
     list        List data or secrets
     login       Authenticate locally
     agent       Start a Vault agent
     server      Start a Vault server
     status      Print seal and HA status
     unwrap      Unwrap a wrapped secret
     
    Other commands:
     audit          Interact with audit devices
     auth           Interact with auth methods
     debug          Runs the debug command
     kv             Interact with Vault's Key-Value storage
     lease          Interact with leases
     monitor        Stream log messages from a Vault server
     namespace      Interact with namespaces
     operator       Perform operator-specific tasks
     path-help      Retrieve API help for paths
     plugin         Interact with Vault plugins and catalog
     policy         Interact with policies
     print          Prints runtime configurations
     secrets        Interact with secrets engines
     ssh            Initiate an SSH session
     token          Interact with tokens
    

    NOTE: monitor was added recently.

    Vault commands are described here online.

  13. Restart your Terminal.app (and provide password):

    exec $SHELL
    

    Auto-complete

  14. Use autocomplete by typing vault k

    Auto-complete is working if you can press tab to complete:

    vault kv
    

    Vault kv store commands

    PROTIP: https://www.vaultproject.io/docs/commands/index.html

    VIDEO: HashiCorp Vault Http API - Create and get secrets with curl (awful drawings)

  15. Add a key:

    vault kv put hello/api username=john
    

    If Vault is not running, you’ll see a response such as this:

    Error making API request.
     
    URL: GET https://vault.whatever-engine.com:8200/v1/sys/internal/ui/mounts/hello/api
    Code: 503. Raw Message:
     
    <html>
    <head><title>503 Service Temporarily Unavailable</title></head>
    <body>
    <center><h1>503 Service Temporarily Unavailable</h1></center>
    </body>
    </html>
    
  16. List keys and values:

    vault kv list hello
    
  17. Retrieve a key’s values:

    vault kv get hello/api 
    
  18. Delete a key’s metadata:

    vault kv metadata delete hello/api 
    
  19. Delete a key:

    vault kv delete hello/api 
    

    Vault secret engine commands

    VIDEO:

  20. Enable the AWS secrets engine:

    vault secrets enable aws
    

    The expected response:

    Success! Enabled the aws secrets engine at: aws/

    See https://www.vaultproject.io/docs/secrets/kv/kv-v2/

  21. Write the root account credentials for the AWS secrets engine via the CLI:

    vault write aws/config/root \
     access_key=1234567890abcdefg \
     secret_key=... \
     region=us-east-1
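
    A follow-on sketch (the role name and IAM policy here are illustrative): define a role, then read dynamic credentials from it:

    vault write aws/roles/my-iam-role \
        credential_type=iam_user \
        policy_document='{"Version": "2012-10-17", "Statement": [{"Effect": "Allow", "Action": "ec2:Describe*", "Resource": "*"}]}'
    vault read aws/creds/my-iam-role   # returns a dynamic access_key/secret_key with a lease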
    

Configure Vault

VIDEO: HashiCorp Vault on Azure [13:24] by Yoko Hyakuna.

https://github.com/Voronenko/hashi_vault_utils provides command scripts and commentary.

A sample config-file.hcl contains:

ui = true
disable_mlock = true

# use the file backend
storage "file" {
   path = "data"
}
listener "tcp" {
   address = "0.0.0.0:8200"
   tls_disable = 1
}
   

VIDEO: How does Vault encrypt data?

Consul

BLOG: To use Consul as the storage backend, download and install it on each node in the cluster, along with these different stanzas:

storage "consul" {
   address = "127.0.0.1:8500"
   path = "vault/"
}
listener "tcp" {
   address = "0.0.0.0:8200"
   cluster_address = "0.0.0.:8201"
   tls_cert_file = "/etc/certs/"
   tls_cert_key = "/etc/certs/vaultkey"
}
seal "awskms" {
   region = "us-east-1"
   kms_key_id = "f3459282-439a-b233-e210-3487b77c7e2"
}
api_addr = "https://10.0.0.10:8200"
ui = true
cluster_name = "my_cluster"
log_level = "info"

Build Docker image using Dockerfile

Create Vault within a Docker image from scratch:

https://computingforgeeks.com/install-and-configure-vault-server-linux/

  1. Install Git in the Linux server:

    apt-get update && apt-get install -y \
      git
    

    https://www.linuxuprising.com/2021/01/apt-key-is-deprecated-how-to-add.html

  2. Use Git to obtain the Dockerfile based on the Spring Cloud Vault sample app

    git clone https://github.com/???/vault.git --depth=1 
    cd vault
    
  3. Create a docker image locally:

    sudo docker build -f Dockerfile -t demo:vault . 
    

    This would run Maven, and a test job.

    If you get the message “unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /Users/…/projects/vault/Dockerfile: no such file or directory”, check that you are running the command from the folder that contains the Dockerfile.

  4. Run the Dockerfile at:

    https://raw.githubusercontent.com/???/Vault/master/Dockerfile

    It contains:

    FROM ubuntu:16.04
    RUN apt-get update
    RUN apt-get update && apt-get install -y \
      default-jre \
      default-jdk \
      git \
      maven 
     
    RUN mvn -version
    RUN git clone https://github.com/hashicorp/vault???.git --depth=1
    

    The above provides commands to install Vault within a blank Docker container.

    Vault-jvm/examples/sample-app is a simple sample app, which is replaced with a real app in the real world.

C. Use Docker image

From https://www.vaultproject.io/docs/concepts/pgp-gpg-keybase Since Vault 0.3, Vault can be initialized using PGP keys. In this mode, Vault will generate the unseal keys and then immediately encrypt them using the given users’ public PGP keys. Only the owner of the corresponding private key is then able to decrypt the value, revealing the plain-text unseal key.

First, create, acquire, or import the appropriate key(s) onto the local machine from which you are initializing Vault.

On a macOS:

  1. Add Docker’s public GPG key for the Trusty version:

    sudo apt-get install -y xserver-xorg-lts-trusty libgl1-mesa-glx-lts-trusty
    

On a Linux server instance’s Terminal CLI:

  1. Add Docker’s public GPG key for the Trusty version:

    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

    OK is the expected response.

  2. View the Linux version code referenced in a later command:

    lsb_release -cs
    

    This returns stretch for Debian and xenial for Ubuntu.

  3. Install Docker for Ubuntu (not Debian):

    sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
    
  4. Update repository:

    sudo apt-get update
    
  5. List policies:

    apt-cache policy docker-ce
    

    The response:

    docker-ce:
      Installed: (none)
      Candidate: 17.09.0~ce-0~ubuntu
      Version table:
      17.09.0~ce-0~ubuntu 500
         500 https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages
      17.06.2~ce-0~ubuntu 500
         500 https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages
      17.06.1~ce-0~ubuntu 500
         500 https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages
      17.06.0~ce-0~ubuntu 500
         500 https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages
      17.03.2~ce-0~ubuntu-xenial 500
         500 https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages
      17.03.1~ce-0~ubuntu-xenial 500
         500 https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages
      17.03.0~ce-0~ubuntu-xenial 500
         500 https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages
    
  6. Install Docker Community Edition:

    sudo apt-get install -y docker-ce
    

    Sample response:

    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    The following additional packages will be installed:
      aufs-tools cgroupfs-mount libltdl7
    Suggested packages:
      mountall
    The following NEW packages will be installed:
      aufs-tools cgroupfs-mount docker-ce libltdl7
    0 upgraded, 4 newly installed, 0 to remove and 17 not upgraded.
    Need to get 21.2 MB of archives.
    After this operation, 100 MB of additional disk space will be used.
    Get:1 http://us-central1.gce.archive.ubuntu.com/ubuntu xenial/universe amd64 aufs-tools amd64 1:3.2+20130722-1.1ubuntu1 [92.9 kB]
    Get:2 http://us-central1.gce.archive.ubuntu.com/ubuntu xenial/universe amd64 cgroupfs-mount all 1.2 [4,970 B]
    Get:3 http://us-central1.gce.archive.ubuntu.com/ubuntu xenial/main amd64 libltdl7 amd64 2.4.6-0.1 [38.3 kB]
    Get:4 https://download.docker.com/linux/ubuntu xenial/stable amd64 docker-ce amd64 17.09.0~ce-0~ubuntu [21.0 MB]
    Fetched 21.2 MB in 0s (22.7 MB/s)     
    Selecting previously unselected package aufs-tools.
    (Reading database ... 66551 files and directories currently installed.)
    Preparing to unpack .../aufs-tools_1%3a3.2+20130722-1.1ubuntu1_amd64.deb ...
    Unpacking aufs-tools (1:3.2+20130722-1.1ubuntu1) ...
    Selecting previously unselected package cgroupfs-mount.
    Preparing to unpack .../cgroupfs-mount_1.2_all.deb ...
    Unpacking cgroupfs-mount (1.2) ...
    Selecting previously unselected package libltdl7:amd64.
    Preparing to unpack .../libltdl7_2.4.6-0.1_amd64.deb ...
    Unpacking libltdl7:amd64 (2.4.6-0.1) ...
    Selecting previously unselected package docker-ce.
    Preparing to unpack .../docker-ce_17.09.0~ce-0~ubuntu_amd64.deb ...
    Unpacking docker-ce (17.09.0~ce-0~ubuntu) ...
    Processing triggers for libc-bin (2.23-0ubuntu9) ...
    Processing triggers for man-db (2.7.5-1) ...
    Processing triggers for ureadahead (0.100.0-19) ...
    Processing triggers for systemd (229-4ubuntu20) ...
    Setting up aufs-tools (1:3.2+20130722-1.1ubuntu1) ...
    Setting up cgroupfs-mount (1.2) ...
    Setting up libltdl7:amd64 (2.4.6-0.1) ...
    Setting up docker-ce (17.09.0~ce-0~ubuntu) ...
    Processing triggers for libc-bin (2.23-0ubuntu9) ...
    Processing triggers for systemd (229-4ubuntu20) ...
    Processing triggers for ureadahead (0.100.0-19) ...
    
  7. List Docker container status:

    sudo systemctl status docker
    

    The response:

    ● docker.service - Docker Application Container Engine
    Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
    Active: active (running) since Sat 2017-11-04 22:00:35 UTC; 1min 28s ago
      Docs: https://docs.docker.com
     Main PID: 13524 (dockerd)
    CGroup: /system.slice/docker.service
            ├─13524 /usr/bin/dockerd -H fd://
            └─13544 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout
    Nov 04 22:00:34 vault-1 dockerd[13524]: time="2017-11-04T22:00:34.552925012Z" level=warning msg="Your kernel does not support swap me
    Nov 04 22:00:34 vault-1 dockerd[13524]: time="2017-11-04T22:00:34.553123462Z" level=warning msg="Your kernel does not support cgroup 
    Nov 04 22:00:34 vault-1 dockerd[13524]: time="2017-11-04T22:00:34.553267498Z" level=warning msg="Your kernel does not support cgroup 
    Nov 04 22:00:34 vault-1 dockerd[13524]: time="2017-11-04T22:00:34.554662024Z" level=info msg="Loading containers: start."
    Nov 04 22:00:34 vault-1 dockerd[13524]: time="2017-11-04T22:00:34.973517284Z" level=info msg="Default bridge (docker0) is assigned wi
    Nov 04 22:00:35 vault-1 dockerd[13524]: time="2017-11-04T22:00:35.019418706Z" level=info msg="Loading containers: done."
    Nov 04 22:00:35 vault-1 dockerd[13524]: time="2017-11-04T22:00:35.029599857Z" level=info msg="Docker daemon" commit=afdb6d4 graphdriv
    Nov 04 22:00:35 vault-1 dockerd[13524]: time="2017-11-04T22:00:35.029962340Z" level=info msg="Daemon has completed initialization"
    Nov 04 22:00:35 vault-1 systemd[1]: Started Docker Application Container Engine.
    Nov 04 22:00:35 vault-1 dockerd[13524]: time="2017-11-04T22:00:35.054191848Z" level=info msg="API listen on /var/run/docker.sock"
    log files:
    
  8. Verify Docker version in case you need to troubleshoot:

    docker --version
    

    The response at time of writing:

    Docker version 20.10.11, build dea9396
    
  9. Start the Docker daemon

  10. Download the Docker image maintained by HashiCorp at https://hub.docker.com/_/vault

    docker pull vault 
    

    NOTE: If you see “Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?” start the Docker daemon, then try again.

    Alternate Docker images

    https://hub.docker.com/r/sjourdan/vault has HashiCorp Vault on a minimal Alpine Linux box

    https://hub.docker.com/r/kintoandar/hashicorp-vault has HashiCorp Vault on a tiny busybox

  11. Set environment variables so the IP addresses used for the redirect and cluster addresses in Vault’s configuration are the address of the named interface inside the container (e.g. eth0):

    VAULT_REDIRECT_INTERFACE 
    VAULT_CLUSTER_INTERFACE 
    
  12. Run the image using the file storage backend at path /vault/file, with a default secret lease duration of one week and a maximum of (720h/24) 30 days:

    docker run --cap-add=IPC_LOCK -e 'VAULT_LOCAL_CONFIG={"backend": {"file": {"path": "/vault/file"}}, "default_lease_ttl": "168h", "max_lease_ttl": "720h"}' vault server
    

    --cap-add=IPC_LOCK: locks memory, which prevents it from being swapped to disk (and thus exposing keys).

    See https://www.vaultproject.io/docs/config/index.html

    NOTE: At startup, the server reads .hcl and .json configuration files from the /vault/config folder. Information passed into VAULT_LOCAL_CONFIG is written into local.json in this directory and read as part of reading the directory for configuration files.

  13. Start consul container with web ui on default port 8500:

    docker run -p 8400:8400 -p 8500:8500 -p 8600:53/udp \
     --hostname consul \
     --name consul progrium/consul \
     -server -bootstrap -ui-dir /ui
    

Binary install

Vault is open-sourced at https://github.com/hashicorp/vault with a marketing home page at https://vaultproject.io.

  1. HashiCorp’s steps for installing Vault are at https://vaultproject.io/docs/install.

  2. Installers for a large number of operating systems are downloaded from HashiCorp’s website:

    https://www.vaultproject.io/downloads.html

    • vault_0.7.3_darwin_amd64.zip for Mac 64 expands to a vault app of 59.6 MB.

  3. Verify the SHA256 hash to ensure that not a single bit was lost during download.

  4. On a Mac, move the unzipped vault binary to a folder of your choice (such as /usr/local/bin).

  5. Add that folder to your PATH environment variable.

  6. Run the vault command from a Terminal window.

    If you get an error that the binary could not be found, then your PATH environment variable was not set up properly.

    If an automated install script is used instead, it installs vault (version 0.1.2 in this example) into the folder:

    /opt/vault_0.1.2

    (the current version for you will likely be different from 0.1.2).

    The installer configures itself by default to listen on localhost port 8200, and registers it as a service called vault-server.

To uninstall, move that folder to trash.

NOTE: Also found vault in chefdk/embedded/lib/ruby/gems/2.5.0/gems/train-1.5.6/lib/train/transports/clients/azure/vault.rb

Verify install

No matter how it was installed:

  1. Check Vault seal status:

    https://127.0.0.1:8200/v1/sys/seal-status
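
    For example, with VAULT_ADDR exported (see below), the same endpoint can be queried from the command line:

    curl -s $VAULT_ADDR/v1/sys/seal-status
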
  2. Open a new Terminal window to verify:

    vault status
    

    If the Vault service is not running, you’ll see:

    Error checking seal status: Get "https://127.0.0.1:8200/v1/sys/seal-status": dial tcp 127.0.0.1:8200: connect: connection refused

    The expected response:

    Key                    Value
    ---                    -----
    Recovery Seal Type     shamir
    Initialized            true
    Sealed                 false
    Total Recovery Shares  5
    Threshold              3
    Version                1.0.2
    ...
    

    Journaling

    To show that secrets are not written to system logs (for example, when using Azure Key Vault for auto-unseal), view the Vault service’s journal (supply the service unit name after -u):

    sudo journalctl -u 

    Start Dev Server

  3. Start the Dev Server per https://www.vaultproject.io/intro/getting-started/dev-server.html

    vault server -dev
    

    PROTIP: This is the command put in a server start-up script.

    Alternately, specify a configuration file in the current folder:

    vault server -config=config-file.hcl
    

    Sample response:

    WARNING! dev mode is enabled! In this mode, Vault runs entirely in-memory
    and starts unsealed with a single unseal key. The root token is already
    authenticated to the CLI, so you can immediately begin using Vault.
     
    You may need to set the following environment variable:
     
     $ export VAULT_ADDR='http://127.0.0.1:8200'
     
    The unseal key and root token are displayed below in case you want to
    seal/unseal the Vault or re-authenticate.
     
    Unseal Key: qvAfCZEkFHS1dYYba8adz5wXHSQe1I9LjoHUbxCrEo4=
    Root Token: s.dqqznrQAJNiLrU9mX3eT8q2p
     
    Development mode should NOT be used in production installations!
    
  4. In a browser, open the web page URL:

    http://127.0.0.1:8200/vault/init

    If the server has not been unsealed (see below), the expected response is JSON: errors: []

    Restart Vault on Linux

  5. Restart Vault (provide password):

    sudo systemctl restart vault
    

Unsealing

When a Vault server is started, it starts in a sealed state.

No operations are possible with a Vault that is sealed.

Unsealing is the process of reconstructing the master key needed to decrypt the encryption key that Vault uses to encrypt and decrypt its data.

PROTIP: Decryption keys are stored with data, in a form encrypted by a master key.

[3:36] Vault splits the master key into multiple key shares (five by default) so that each of that many trusted people holds a different portion. A threshold of those same people must provide their portions when the master key is needed again. CAUTION: The master key should not be stored anywhere but in memory.

Alternately, unsealing can be automated (auto-unseal) by using a cloud-managed key, such as one from Azure Key Vault, referenced in a seal stanza like this:

seal "azurekeyvault" {
      tenant_id     = "12345678-1234-1234-1234-1234567890"
      client_id     = "12345678-1234-1234-1234-1234567890"
      client_secret = "DDOU..."
      vault_name    = "hc-vault"
      key_name      = "vault_key"
   }
   
To unseal manually, each key holder provides an unseal key share:

./vault_ unseal af29615803fc23334c3a93f8ad58353b587f50eb0399d23a6950721cbae94948
   

The response confirms:

Sealed: false
Key Shares: 1
Key Threshold: 1
Unseal Progress: 0
   

Shamir refers to the Shamir secret sharing algorithm defined at: https://en.wikipedia.org/wiki/Shamir%27s_Secret_Sharing

Higher Key Threshold values would require more key holders to perform unseal with their parts of the key. This provides an additional level of security for accessing data.
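
The number of key shares and the threshold are chosen when the Vault is initialized. In current Vault versions this is done with the operator subcommand (older releases used vault init):

vault operator init -key-shares=5 -key-threshold=3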

Re-Sealing Vault server

In case of an emergency, such as:

  • If a secret stored in Vault is leaked - a new secret should be generated and replaced in Vault, with a key rotation following.

  • If vault user credentials are leaked - the user credentials should be revoked and a key rotation should be performed.

  • If vault unseal keys are leaked - a rekey should be performed.

To prevent any actions or requests from being performed against the Vault server, it should be sealed immediately.

To seal the Vault server:

vault seal
   

This buys time to investigate the cause of the issue and to find an appropriate solution.


Vault Principals

Generate principal

  1. Using Azure to obtain a lease:

    vault read azure/creds/reader-role

    Notice “lease_duration”.

    The lease can be renewed with vault lease renew before it expires; reading the path again issues a new set of credentials with a new lease.

    PROTIP: This should be in a script that incorporates other revocations when someone leaves an organization.

Revoke a lease

To revoke a lease on Azure:

vault lease revoke -prefix azure/creds/reader-role

Vault on AWS

There are several options for hosting Vault within the Amazon cloud.

https://aws.amazon.com/quickstart/architecture/vault describes “A unified interface to manage and encrypt secrets on the AWS Cloud”.

(Diagram: HashiCorp Vault on AWS architecture)

Install Vault within AWS EKS cluster

HashiCorp announced a Helm chart to setup Vault in Kubernetes

https://www.vaultproject.io/docs/platform/k8s/helm

https://github.com/hashicorp/vault-helm

To those Helm chart templates we would need to add monitoring/observability, SIEM, etc.

There is also a Helm chart for the Consul provider.

Authorization

To continue working with Vault:

  1. Identify yourself by providing the initial root token using the auth command, such as:

    ./vault_ auth 98df443c-65ee-d843-7f4b-9af8c426128a
    

    The expected successful response:

    Successfully authenticated! The policies that are associated
    with this token are listed below:
        
    root
    

    The Access Control policy named “root” gives “superuser” level access to everything in Vault.

    As we plan to store secrets for multiple projects, we should be able to clearly separate access to secrets that belong to different projects. And this is where policies do their job.

    Policies in Vault are formatted with HCL, a human-readable configuration format. It is also JSON-compatible. An example policy is shown below:

    path "secret/project/name" {
      policy = "read"
    }
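
    In current Vault versions, policy rules use a capabilities list rather than the older policy = attribute. A minimal sketch (the secret/data/ prefix applies only if the KV version 2 engine is mounted at secret/; the path itself is illustrative):

    path "secret/data/project/name" {
      capabilities = ["read"]
    }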
    

Jenkins plug-in

https://github.com/jenkinsci/hashicorp-vault-plugin is a Jenkins plug-in to establish a build wrapper to populate environment variables from secrets stored in HashiCorp’s Vault. It uses the “App Role” authentication backend which HashiCorp explicitly recommends for machine-to-machine authentication.

The plug-in allows use of a GitHub/GitLab personal access token (https://github.com/blog/1509-personal-api-tokens).

Alternately, use a Vault Token, either configured directly in Jenkins or read from an arbitrary file on the Jenkins machine.

A Vault Token File Credential works like a Vault Token Credential, except that the token is read from a file on your Jenkins machine. You can use this in combination with a script that periodically refreshes your token.

See https://github.com/amarruedo/hashicorp-vault-jenkins

GitHub Token


   vault auth -method=github token=GITHUB_ACCESS_TOKEN
   

Upon success, a Vault token will be stored at $HOME/.vault-token.

vault list secret/path/to/bucket
   

This uses the token at $HOME/.vault-token if it exists.

See http://chairnerd.seatgeek.com/practical-vault-usage/

https://www.vaultproject.io/intro/getting-started/deploy.html

Handling secrets in CLI

  1. As a demonstration, store the secret value “Pa$$word321” named “donttell”:

    vault write secret/donttell value=Pa$$word321 excited=yes
    

    REMEMBER: The secret/ prefix to the secret path is necessary. The value is written without double-quotes, but values containing shell special characters (such as $) should be single-quoted so the shell does not expand them.

    Because commands are stored in shell history, it’s preferred to use files when handling secrets.

  2. Retrieve secrets (the last command reads the secret just added):

    vault read secrets/apps/web/username
    vault read secrets/apps/portal/username
    vault read secrets/common/api_key
    vault read secret/donttell
    

    The response, for example:

    Key                 Value
    ---                 -----
    refresh_interval    768h0m0s
    excited             yes
    value               Pa$$word321
    
  3. Output a secret into a JSON file:

    vault read -format=json secret/donttell
    
    {
     "request_id": "68315073-6658-e3ff-2da7-67939fb91bbd",
     "lease_id": "",
     "lease_duration": 2764800,
     "renewable": false,
     "data": {
         "excited": "yes",
         "value": "Pa$$word321"
     },
     "warnings": null
    }   
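
    To extract just one field from that JSON output, it can be piped through jq (a sketch, assuming jq is installed):

    vault read -format=json secret/donttell | jq -r '.data.value'
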
  4. Delete a secret:

    vault delete secret/donttell
    
    Success! Deleted 'secret/donttell' if it existed.
    

Rekey

Vault’s rekey command allows for the recreation of unseal keys as well as changing the number of key shares and key threshold. This is useful for adding or removing Vault admins.

Vault’s rotate command is used to change the encryption key used by Vault. This does not require anything other than a root token. Vault will continue to stay online and responsive during a rotate operation.
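
In current Vault versions these are subcommands of vault operator (older releases used vault rekey and vault rotate directly); a hedged sketch:

vault operator rekey -init -key-shares=5 -key-threshold=3   # recreate unseal keys with new shares and threshold
vault operator rotate                                       # rotate the underlying encryption key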

Store and access secrets within a program

Use libraries for:

  • Python
  • C#
  • Java
  • Node JavaScript
  • Golang

Several Vault clients have been written.
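
All of these clients wrap Vault’s HTTP API, so any language with an HTTP library can read secrets directly. A minimal sketch with curl (reusing the secret/donttell path from the CLI section above; with KV version 2 the path would be secret/data/donttell):

curl -s -H "X-Vault-Token: $VAULT_TOKEN" $VAULT_ADDR/v1/secret/donttell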


https://holdmybeersecurity.com/2020/11/24/integrating-vault-secrets-into-jupyter-notebooks-for-incident-response-and-threat-hunting/

Vault CLI Katacoda hands-on lab

The hands-on Katacoda lab Store Secrets using HashiCorp Vault makes use of a vault.hcl file:

backend "consul" {
  address = "consul:8500"
  advertise_addr = "consul:8300"
  scheme = "http"
}
listener "tcp" {
  address = "0.0.0.0:8200"
  tls_disable = 1
}
disable_mlock = true

It specifies Consul as the backend to store secrets, with Consul running in HA mode. In production, scheme = “http” should be changed to scheme = “https” (use TLS). The 0.0.0.0 address binds Vault to listen on all IP addresses.

  1. The vault.hcl file is processed by:

    docker create -v /config --name config busybox; docker cp vault.hcl config:/config/
    
    The returned container ID:
    
    bc973810b4bb77788b37d269b669ba9559a001c5dab7da557c887f7de024d2f0
  2. Launch a single Consul agent:

    docker run -d --name consul \
      -p 8500:8500 \
      consul:v0.6.4 \
      agent -dev -client=0.0.0.0
    
    The returned container ID:
    
    9b21b47f350931081232d4730341c1221bc086d5bb581bdf06992a334a0c51bf
    

    In production, we’d want to have a cluster of 3 or 5 agents as a single node can lead to data loss.

  3. Launch a single vault-dev container:

    docker run -d --name vault-dev \
    --link consul:consul \
    -p 8200:8200 \
    --volumes-from config \
    cgswong/vault:0.5.3 server -config=/config/vault.hcl
    
    The returned container ID:
    
    71f518bb3165313a1e8e8d809e44b0a251dd7c138c5f045d634bae34439d1af7
    

    PROTIP: The --volumes-from config flag mounts the config volume created earlier so the container can read /config/vault.hcl.

  4. Create an alias “vault” to proxy commands to vault to the Docker container.

    alias vault='docker exec -it vault-dev vault "$@"'
    export VAULT_ADDR=http://127.0.0.1:8200
    
  5. Initialize the vault so that the unseal keys and root token go into the file keys.txt:

    vault init -address=${VAULT_ADDR} > keys.txt
    cat keys.txt
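
    A hedged follow-up sketch to unseal using three of the keys captured in keys.txt (the grep patterns assume the “Unseal Key 1:” line format printed by vault init at this version):

    KEY1=$(grep 'Unseal Key 1' keys.txt | awk '{print $NF}')
    KEY2=$(grep 'Unseal Key 2' keys.txt | awk '{print $NF}')
    KEY3=$(grep 'Unseal Key 3' keys.txt | awk '{print $NF}')
    vault unseal -address=${VAULT_ADDR} ${KEY1}
    vault unseal -address=${VAULT_ADDR} ${KEY2}
    vault unseal -address=${VAULT_ADDR} ${KEY3}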
    

Golang

https://github.com/Omar-Khawaja/vault-example/blob/master/main.go

package main
 
import (
	"fmt"
	"github.com/hashicorp/vault/api"
	"os"
)
 
// Read the Vault token and server address from environment variables.
var token = os.Getenv("TOKEN")
var vault_addr = os.Getenv("VAULT_ADDR")
 
func main() {
	// Build a client configuration pointing at the Vault server.
	config := &api.Config{
		Address: vault_addr,
	}
	client, err := api.NewClient(config)
	if err != nil {
		fmt.Println(err)
		return
	}
	client.SetToken(token)
	// Use the "logical" API to read the KV v2 secret at secret/data/foo.
	c := client.Logical()
	secret, err := c.Read("secret/data/foo")
	if err != nil {
		fmt.Println(err)
		return
	}
	// KV v2 responses nest the key/value pairs under a "data" map.
	m := secret.Data["data"].(map[string]interface{})
	fmt.Println(m["hello"])
}
   

CA for SSH

Vault can serve as a Root or Intermediate Certificate Authority.
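
For example, Vault’s SSH secrets engine can sign client SSH keys so that servers trust a Vault-managed CA rather than individual public keys. A hedged sketch (the ssh-client-signer mount path, the my-role role name, and the ubuntu default user are illustrative):

vault secrets enable -path=ssh-client-signer ssh
vault write ssh-client-signer/config/ca generate_signing_key=true
vault write ssh-client-signer/roles/my-role \
    key_type=ca \
    allow_user_certificates=true \
    allowed_users="*" \
    default_user=ubuntu \
    ttl=30m
vault write -field=signed_key ssh-client-signer/sign/my-role \
    public_key=@$HOME/.ssh/id_rsa.pub > signed-cert.pub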

References

“How to Install Vault” on CodeMentor

github.com/dandb/jenkins-provining

VIDEO by Damien Roche at Dun & Bradstreet on 30 April 2017

Kubernetes: Up & Integrated — Secrets & Configuration by Tristan Colgate-McFarlane

https://www.joyent.com/blog/secrets-management-in-the-autopilotpattern Vault provides encryption at rest for secrets, encrypted communication of those secrets to clients, and role-based access control and auditability for secrets. And it does so while allowing for high-availability configuration with a straightforward single-binary deployment. See the Vault documentation for details on their security and threat model. – See https://www.vaultproject.io/docs/internals/security.html

Vault uses Shamir’s Secret Sharing to control access to the “first secret” that we use as the root of all other secrets. A master key is generated automatically and broken into multiple shards. A configurable threshold of k shards is required to unseal a Vault with n shards in total.

Dynamic Credentials and Encryption as a data service, and “Policy as Code” vs “Secrets as Code.”

VIDEO COURSE: Getting Started with HashiCorp Vault by Bryan Krausen (@btkrausen)

Database

https://play.instruqt.com/hashicorp/tracks/vault-dynamic-database-credentials

https://www.vaultproject.io/docs/secrets/databases/

https://www.vaultproject.io/docs/secrets/databases/mysql-maria/

  1. See whether a secrets engine of Type “database” is already enabled:

    vault secrets list

  2. Enable the Database secrets engine on the Vault server.

    vault secrets enable -path=lob_a/workshop/database database

    The expected response includes “Success! Enabled the database secrets engine at: lob_a/workshop/database/”.

    Vault’s Database secrets engine dynamically generates credentials (username and password) for many databases.

    Configure the database secrets engine you enabled (above) on the path lob_a/workshop/database to work with the local instance of the MySQL database. Use a specific path rather than the default “database” to illustrate that multiple instances of the database secrets engine could be configured for different lines of business that might each have multiple databases.

  3. Configure the Database Secrets Engine on the Vault server.

    All secrets engines must be configured before they can be used.

    We first need to configure the database secrets engine to use the MySQL database plugin and valid connection information. We are configuring a database connection called “wsmysqldatabase” that is allowed to use two roles that we will create below.

    vault write lob_a/workshop/database/config/wsmysqldatabase \
      plugin_name=mysql-database-plugin \
      connection_url="{{username}}:{{password}}@tcp(localhost:3306)/" \
      allowed_roles="workshop-app","workshop-app-long" \
      username="hashicorp" \
      password="Password123"
    

    This will not return anything if successful.

    Note that the username and password are templated in the “connection_url” string, getting their values from the “username” and “password” fields. We do this so that reading the path “lob_a/workshop/database/config/wsmysqldatabase” will not show them.

    To test this, try running this command:

    vault read lob_a/workshop/database/config/wsmysqldatabase
    
    Key                                   Value
    ---                                   -----
    allowed_roles                         [workshop-app workshop-app-long]
    connection_details                    map[connection_url:{{username}}:{{password}}@tcp(localhost:3306)/ username:hashicorp]
    plugin_name                           mysql-database-plugin
    root_credentials_rotate_statements    []
    

    You will not see the username and password.

    We used the initial MySQL username “hashicorp” and password “Password123” above. Validate that you can login to the MySQL server with this command:

    mysql -u hashicorp -pPassword123
    

    You should be given a mysql> prompt.

    Logout of the MySQL server by typing \q at the mysql> prompt. This should return you to the root@vault-mysql-server:~# prompt.

    We can make the configuration of the database secrets engine even more secure by rotating the root credentials (actually just the password) that we passed into the configuration. We do this by running this command:

    vault write -force lob_a/workshop/database/rotate-root/wsmysqldatabase
    

    This should return “Success! Data written to: lob_a/workshop/database/rotate-root/wsmysqldatabase”.

    Now, if you try to login to the MySQL server with the same command given above, it should fail with the message “ERROR 1045 (28000): Access denied for user ‘hashicorp’@’localhost’ (using password: YES)”. Verify that by trying:

    mysql -u hashicorp -pPassword123
    

    Note: You should not use the actual root user of the MySQL database (despite the reference to “root credentials”); instead, create a separate user with sufficient privileges to create users and to change its own password.

    Now, you should create the first of the two roles we will be using, “workshop-app-long”, which generates credentials with an initial lease of 1 hour that can be renewed for up to 24 hours.

    vault write lob_a/workshop/database/roles/workshop-app-long \
      db_name=wsmysqldatabase \
      creation_statements="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}';GRANT ALL ON my_app.* TO '{{name}}'@'%';" \
      default_ttl="1h" \
      max_ttl="24h"
    

    This should return “Success! Data written to: lob_a/workshop/database/roles/workshop-app-long”.

    And then create the second role, “workshop-app” which has shorter default and max leases of 3 minutes and 6 minutes. (These are intentionally set long enough so that you can use the credentials generated for the role to connect to the database but also see them expire in the next challenge.)

    vault write lob_a/workshop/database/roles/workshop-app \
      db_name=wsmysqldatabase \
      creation_statements="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}';GRANT ALL ON my_app.* TO '{{name}}'@'%';" \
      default_ttl="3m" \
      max_ttl="6m"
    

    This should return “Success! Data written to: lob_a/workshop/database/roles/workshop-app”.

    The database secrets engine is now configured to talk to the MySQL server and is allowed to create users with two different roles. In the next challenge, you’ll generate credentials (username and password) for these roles.

  • Generate and use dynamic database credentials for the MySQL database.

  • Renew and revoke database credentials for the MySQL database.
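
A hedged sketch of those next steps (the path matches the mount configured above; <lease_id> is a placeholder for the lease_id returned by the read):

    vault read lob_a/workshop/database/creds/workshop-app
    vault lease renew lob_a/workshop/database/creds/workshop-app/<lease_id>
    vault lease revoke lob_a/workshop/database/creds/workshop-app/<lease_id>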

https://www.vaultproject.io/docs/secrets/databases/mysql-maria/
https://www.vaultproject.io/docs/secrets/databases/#usage
https://www.vaultproject.io/api/secret/databases/#generate-credentials

Generate dynamic credentials for a MySQL database from Vault.

https://play.instruqt.com/hashicorp/tracks/vault-dynamic-database-credentials


Alternative: Environment variables

https://www.youtube.com/watch?v=IolxqkL7cD8 Hiding passwords in environment variables on Windows

import os
 
db_user = os.environ.get('DB_USER')
db_password = os.environ.get('DB_PASS')
 
print(db_user)
print(db_password)
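
On macOS/Linux, the variables that the Python snippet reads can be set in the shell before launching the program (the video above covers the Windows equivalent; names and values here are illustrative):

export DB_USER=appuser
export DB_PASS='example-password'
python app.py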

Alternative: Google Secret Manager

https://cloud.google.com/community/tutorials/secrets-manager-python

https://cloud.google.com/secret-manager/docs

Alternative: JupyterLab Credential Store

https://towardsdatascience.com/the-jupyterlab-credential-store-9cc3a0b9356

Alternative: python-dotenv

Vicki Boykis blogged about the alternatives, which includes this for Jupyter notebook coders:

%load_ext dotenv
%dotenv
import os
os.environ.get("API_TOKEN")

“dotenv” is from python-dotenv at
https://github.com/theskumar/python-dotenv

It retrieves an .env file created to define your project’s secret environment variables (using the package’s command-line tool) documented at
https://github.com/theskumar/python-dotenv#command-line-interface
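
A minimal sketch of such an .env file (variable names and values are illustrative):

API_TOKEN=example-token-value
DB_PASS=example-password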

That .env file name is listed in .gitignore so it is not pushed to GitHub. But files already committed to GitHub remain visible in its history.


Chrome/ Browser Extension

PROTIP: Here is a tool to test access to a Vault instance (both locally and publicly).

VaultPass Chrome/Firefox Browser Extension (installed from GitHub) explained at VIDEO: “Enabling Teams to Share Secrets Confidentially”

(rather than using Git, which exposes all secrets to every team/user with repository access, and where each password rotation takes up more space in history; use of GPG is cumbersome)

VaultPass Options

In “Developer-First Application Security and DevSecOps” by Kevin Alwell (@alwell-kevin at GitHub)


Learning Resources

https://learn.hashicorp.com/vault

HashiCorp’s YouTube channel also has Vault videos.

Katacoda’s “Store Secrets using HashiCorp Vault” provides a web-based interactive bash terminal.

ACloudGuru.com’s HashiCorp Vault 18 hour video course by Ermin Kreponic (a resident of Sarajevo).

At Oreilly.com, “Getting Started with HashiCorp Vault” (December 2019) by Bryan Krausen (of Skylines Academy, the HashiTimes newsletter, and the BOOK “Running HashiCorp Vault in Production” with Dan McTeer).

https://www.vaultproject.io/docs/internals/security/ Security Model

https://www.youtube.com/watch?v=5-RMu9M_Anc How to Integrate HashiCorp Vault With Jenkins CloudBeesTV


More on DevOps

This is one of a series on DevOps:

  1. DevOps_2.0
  2. ci-cd (Continuous Integration and Continuous Delivery)
  3. User Stories for DevOps
  4. Enterprise Software

  5. Git and GitHub vs File Archival
  6. Git Commands and Statuses
  7. Git Commit, Tag, Push
  8. Git Utilities
  9. Data Security GitHub
  10. GitHub API
  11. TFS vs. GitHub

  12. Choices for DevOps Technologies
  13. Pulumi Infrastructure as Code (IaC)
  14. Java DevOps Workflow
  15. Okta for SSO & MFA

  16. AWS DevOps (CodeCommit, CodePipeline, CodeDeploy)
  17. AWS server deployment options
  18. AWS Load Balancers

  19. Cloud services comparisons (across vendors)
  20. Cloud regions (across vendors)
  21. AWS Virtual Private Cloud

  22. Azure Cloud Onramp (Subscriptions, Portal GUI, CLI)
  23. Azure Certifications
  24. Azure Cloud

  25. Azure Cloud Powershell
  26. Bash Windows using Microsoft’s WSL (Windows Subsystem for Linux)
  27. Azure KSQL (Kusto Query Language) for Azure Monitor, etc.

  28. Azure Networking
  29. Azure Storage
  30. Azure Compute
  31. Azure Monitoring

  32. Digital Ocean
  33. Cloud Foundry

  34. Packer automation to build Vagrant images
  35. Terraform multi-cloud provisioning automation
  36. Hashicorp Vault and Consul to generate and hold secrets

  37. Powershell Ecosystem
  38. Powershell on MacOS
  39. Powershell Desired System Configuration

  40. Jenkins Server Setup
  41. Jenkins Plug-ins
  42. Jenkins Freestyle jobs
  43. Jenkins2 Pipeline jobs using Groovy code in Jenkinsfile

  44. Docker (Glossary, Ecosystem, Certification)
  45. Make Makefile for Docker
  46. Docker Setup and run Bash shell script
  47. Bash coding
  48. Docker Setup
  49. Dockerize apps
  50. Docker Registry

  51. Maven on MacOSX

  52. Ansible

  53. MySQL Setup

  54. SonarQube & SonarSource static code scan

  55. API Management Microsoft
  56. API Management Amazon

  57. Scenarios for load
  58. Chaos Engineering

More on Security

This is one of a series on Security:

  1. SOC2

  2. Git Signing
  3. Hashicorp Vault

  4. WebGoat known insecure PHP app and vulnerability scanners
  5. Test for OWASP using ZAP on the Broken Web App

  6. Encrypt all the things

  7. AWS Security (certification exam)
  8. AWS IAM (Identity and Access Management)

  9. Cyber Security
  10. Security certifications