Wilson Mar

This sample Bash script contains multiple features: install, configure, and run (then remove) a web app within Docker on macOS and Linux, with one copy/paste



This article describes a Bash script that, with a single command, can do all this:

  1. Define initial lines that:
    • Declare the file type with a first-line “shebang”
    • Disable specific Bash ShellCheck rules
    • Echo time, name, and version metadata about the run
    • Hold metadata about the script in comments
    • Capture a time stamp to later calculate how long the script runs
  2. Display a menu if no parameter is specified in the command line
  3. Define variables for use as “feature flags” to control which specific features run.
  4. Set variables associated with each parameter flag.
  5. Define custom functions to echo text to screen
  6. Detect the operating system in use to run the install appropriate to that OS.
  7. Upgrade to the latest version of bash
  8. Set Bash traps to display information if script is interrupted.
  9. Print information about the operating environment and set “Strict Mode” based on parameters specified for the run.
  10. Install installers (XCode, HomeBrew, apt-get), depending on operating system
  11. Define shell utility functions, such as ShellCheck and the function to kill process by name, etc.
  12. Install basic utilities: Git, jq

    Run configuration:

  13. Get secrets (and other run-time variables) from a clear-text file in $HOME folder or from a crypto program.
  14. Configure project folder location where files are created during the run.
  15. Obtain repository from GitHub.
  16. Reveal secrets stored within .gitsecret folder within repo from GitHub (after installing gnupg and git-secret)
  17. Pipenv and Pyenv to install Python and its modules.

    Connect to cloud (to get secrets):

  18. Connect to Google Cloud Platform (GCP), if requested, to get secrets
  19. Connect to AWS
  20. Connect to Azure


  21. Install K8S minikube
  22. Install EKS using eksctl
  23. Read secrets from a configuration file in clear text, encrypted file, Vault API using govaultenv
  24. Use CircleCI
  25. Use Yubikey
  26. Use Hashicorp Vault
  27. Use NodeJs
  28. Run Virtualenv
  29. Configure Pyenv with virtualenv
  30. Use Anaconda
  31. Use GoLang
  32. Use Python
  33. Use Tensorflow
  34. Use Ruby
  35. Setup Eggplant

  36. Use Docker
  37. Run within Docker
  38. Update GitHub


  39. -C to remove GitHub folder after run
  40. -K to Kill processes after run (to save CPU)
  41. -D to Delete containers and other files after run (to save disk space)
  42. -M to remove Docker iMages downloaded from DockerHub (to save disk space)

Each of the above is preceded by “###” comment tags in the script.

I’ve refined the script over the years to be a “Swiss Army Knife” that enables me to very quickly get stuff done. So it contains most of the coding tricks one would need. The script includes all the above features for apps in NodeJs, Ruby, and Python (Anaconda and Tensorflow, and a program cloned from GitHub) so that we can avoid some of the toil and human error of manually typing commands on each new instance.

If this is too much for you, just cut out the features you don’t want, and enjoy the rest.

Copy and paste invocation for menu

  1. Open a Terminal on your Mac or instantiate a Linux machine on VMWare, EC2, or other cloud.

  2. Execute the script just to get a short description of the parameters controlling what features are invoked, copy this command into your Clipboard by triple-clicking “bash” to turn this command line gray, then press command+C to copy:

    bash -c "$(curl -fsSL https://raw.githubusercontent.com/wilsonmar/DevSecOps/master/bash/sample.sh)"

    =========================== 2020-06-28T10:26:41-0600-347 ./sample.sh v0.72
    -E            continue (NOT stop) on error
    -v            run -verbose (list space use and each image to console)
    -q           -quiet headings for each step
    -x            set -x to trace command lines
    -I           -Install jq, brew, docker, docker-compose, etc.
    -U           -Upgrade installed packages
    -s           -secrets retrieve
    -S "~/.alt.secrets.sh"  -Secrets full file path
    -H           install/use -Hashicorp Vault secret manager
    -m           Setup Vault SSH CA cert
    -L           use CircleCI
    -aws         -AWS cloud
    -eks         -eks (Elastic Kubernetes Service) in AWS cloud
    -g "abcdef...89" -gcloud API credentials for calls
    -p "cp100"   -project in cloud
    -d           -delete GitHub and pyenv from previous run
    -c           -clone from GitHub
    -N           -Name of GitHub Repo folder
    -n "John Doe"            GitHub user -name
    -e "john_doe@gmail.com"  GitHub user -email
    -k           -k install and use Docker
    -k8s         -k8s (Kubernetes) minikube
    -b           -build Docker image
    -dc           use docker-compose.yml file
    -w           -write image to DockerHub
    -r           -restart (Docker) before run
    -py          run with Pyenv
    -V           to run within VirtualEnv (pipenv is default)
    -tf          -tensorflow
    -A           run with Python -Anaconda
    -y            install Python Flask
    -i           -install Ruby and Refinery
    -j            install -JavaScript (NodeJs) app with MongoDB
    -G           -GitHub is the basis for program to run
    -F "abc"     -Folder inside repo
    -f "a9y.py"  -file (program) to run
    -P "-v -x"   -Parameters controlling program called
    -u           -update GitHub
    -a           -actually run server (not dry run)
    -t           setup -test server to run tests
    -o           -open/view app or web page in default browser
    -K           stop OS processes at end of run (to save CPU)
    -D           -Delete files after run (to save disk space)
    -C           remove -Cloned files after run (to save disk space)
    -M           remove Docker iMages pulled from DockerHub
    USAGE EXAMPLE during testing:
    ./sample.sh -v -W -r -k -a -o -K -D  # WebGoat Docker with Contrast agent
    ./sample.sh -v -s -eggplant -k -a -K -D  # eggplant use docker-compose of selenium-hub images
    ./sample.sh -v -S "$HOME/.mck-secrets.sh" -eks -D
    ./sample.sh -v -S "$HOME/.mck-secrets.sh" -H -m -t    # Use SSH-CA certs with -H Hashicorp Vault -test actual server
    ./sample.sh -v -g "abcdef...89" -p "cp100-1094"  # Google API call
    ./sample.sh -v -n -a  # NodeJs app with MongoDB
    ./sample.sh -v -i -o  # Ruby app
    ./sample.sh -v -I -U -c -s -y -r -a -AWS   # Python Flask web app in Docker
    ./sample.sh -v -I -U    -s -H    -t        # Initiate Vault test server
    ./sample.sh -v          -s -H              #      Run Vault test program
    ./sample.sh -q          -s -H    -a        # Initiate Vault prod server
    ./sample.sh -v -I -U -c    -H -G -N "python-samples" -f "a9y-sample.py" -P "-v" -t -AWS -C  # Python sample app using Vault
    ./sample.sh -v -V -c -T -F "section_2" -f "2-1.ipynb" -K  # Jupyter anaconda Tensorflow in Venv
    ./sample.sh -v -V -c -L -s    # Use CircLeci based on secrets
    ./sample.sh -v -D -M -C
    ./sample.sh -G -v -f "challenge.py" -P "-v"  # to run a program in python-samples
    ./sample.sh -v -s -H -m -o -t  # Vault SSH keygen

    Common Run Parameters

  3. Change what each run of the script does by changing parameters invoking the command, such as -v -I -U -c -s -r -a -o

    bash -c "$(curl -fsSL https://raw.githubusercontent.com/wilsonmar/DevSecOps/master/bash/sample.sh)" -v -I -U -c -s -r -a -o

    The script ends with a message like this:

    ✔ End of script after 1883 seconds and 677960 bytes of disk space.

    -v for -verbosity adds additional notes.

    -vv for debugging verbosity such as a display example log types.

    -q for -quiet suppression of headers and footers that appear by default, such as when running in production mode.

    -t for -testing mode, which runs local Vault and app servers.

    -I runs installers, but installs each only if it is not already installed.

    -I and -U updates installers even though each is installed. Some installers are invoked only if the feature is also specified. But Homebrew and git are updated if no other utilities are specified.

    -o -opens the sample app in your default browser.

The rest of this article describes coding tricks used and how you might customize the script.

Edit sample.sh

  1. Use a text editor or IDE to open the sample.sh file.

    Indent 3 spaces

    It’s an aesthetic choice.

    Google’s Style Guide calls for two spaces.

    But three spaces make lines indented under if align better. And the if statement is the most common in the script.

    First line Shebang and comments

  2. Look at the first line.

    Unlike the Windows operating system, which decides what program is used to open a file based on the file name “extension” behind the dot, Linux systems ignore the file name and look into the file at its first line.

    # is a comment in Bash scripts.

    #! is called the “Shebang”.

    There are several options for a shebang.

    The “Bourne-compliant” shebang is for the Bash v3.2 shell installed in folder /bin by default on macOS up to High Sierra. Thus:

    #!/bin/bash
    / means the folder is from the root level, above where the operating system stores home files for specific users.

    However, Bash v4 is installed (for parallel operation) by Homebrew in another folder:

    /usr/local/bin/bash
    This blog describes what is improved by version 4, such as “associative arrays”.

    PROTIP: The recommended shebang now is to use the “env” program to select the appropriate version:

    #!/usr/bin/env bash

    /usr/bin is the folder that holds the executable program env.

    env is the name of the program that obtains the appropriate shell based on the next nugget (“bash”).

    bash is the interpreter. (python is used in Python scripts.)
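As a minimal illustration (a hypothetical two-line script, not from sample.sh itself), env locates whichever bash appears first in your PATH and reports it:

```shell
#!/usr/bin/env bash
# Print which bash binary env selected and what version it is:
echo "Running bash ${BASH_VERSION} from $(command -v bash)"
```

Save it, chmod +x it, and run it both before and after installing a newer Bash to see the selected interpreter change.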

    Disable Shellcheck Linting Rules

  3. This comment line disables (excludes) ShellCheck linter check SC2001 in the file:

    # shellcheck disable=SC2001  # See if you can use ${variable//search/replace} instead.

    Since ShellCheck version 0.4.6, the directive can be added anywhere to apply to the next line in the script.

    Alternately, in the script, export the codes of the rules to exclude from checking:

    export SHELLCHECK_OPTS="-e SC2001 -e SC2059 -e SC2034 -e SC1090"

    Also, the entire script can be copied and pasted online for checking at shellcheck.net, but that can be a security violation. So we install it for local running:

    Install ShellCheck from https://github.com/koalaman/shellcheck

    brew install shellcheck
  4. This script runs ShellCheck to lint itself.

    shellcheck sample.sh

    No response text is issued if no errors were found.

    PROTIP: If ShellCheck finds an issue, the script stops.
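The self-linting step can be sketched like this (hedged: the guard conditions are illustrative, not the script’s exact wording) — skip silently if ShellCheck is not installed, and stop the run if it reports findings:

```shell
# Lint sample.sh with ShellCheck, if both are present in the current folder/PATH:
if command -v shellcheck >/dev/null 2>&1 && [ -f sample.sh ]; then
   shellcheck sample.sh || { echo "ShellCheck found issues. Stopping."; exit 9; }
fi
echo "lint step done"
```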

    File metadata

    Metadata about the file, such as its version and description, is kept in comments near the top of the script.

    Clear screen echo

  5. To show responses at the top of the terminal, remove the # comment marker to enable the clear command (which clears the screen but not history).

    However, a lot of output would scroll past, so it is rather useless. Better to print a long string as a visual marker to differentiate between different runs:

    echo "========================= $SCRIPT_VERSION"

    The SCRIPT_VERSION is shown at the beginning and the end to detect whether a cached version of the script was used. That happens.

    Time end - start = elapsed

    To determine elapsed time, START time stamps are captured as soon as the script starts.

    When the script ends, END time stamps are captured to calculate the elapsed time.

    There are two time stamp formats.

    EPOCH_START="$(date -u +%s)"  # such as 1572634619

    captures the number of seconds since the Jan 1, 1970 epoch point in time.

    LOG_DATETIME=$(date +%Y-%m-%dT%H:%M:%S%z)-$((1 + RANDOM % 1000))

    captures the date in a human-readable year-month-day-hour-minutes “ISO 8601” format, which also includes the offset in hours and minutes from UTC/GMT, such as “-0600” (for US Central Time) in this sample output:


    An additional RANDOM number is added to ensure uniqueness among several instances running.

    PROTIP: Values stored in variables during a run do not persist.

    The number of seconds is rounded DOWN, so a run that takes less than a second is measured as 0 seconds.
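Putting the two stamps together, the start/end arithmetic can be condensed to this sketch (sleep stands in for the real work):

```shell
EPOCH_START="$(date -u +%s)"   # seconds since the Jan 1, 1970 epoch
sleep 1                        # stand-in for the real work of the script
EPOCH_END="$(date -u +%s)"
EPOCH_DIFF=$(( EPOCH_END - EPOCH_START ))   # integer seconds, rounded down
echo "End of script after ${EPOCH_DIFF} seconds"
```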


    Arguments into script

    The args_prompt() function defines text that is echoed to the console if the script is invoked with no arguments, such as:

    ./sample.sh -h -v -I -U -c -s -r -a -o

    Checking for whether parameters were added is done by this code:

    if [ $# -eq 0 ]; then  # display if no parameters are provided:

    A sample response was shown above.

    The USAGE example shows the various parameters that need to be added for specific actions to be taken by the script.

    This design ensures the flexibility of the script.

    Flags not associated with a text string specification (such as Verbose) default to false and get switched to true when specified.

    Text variables are defined first, then exported in a separate step as recommended by Shellcheck.
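The flag-parsing loop can be sketched like this (flag names come from the menu above; the set -- line simulates a command line for the demo, and the real script handles many more flags):

```shell
set -- -v -n "John Doe"    # simulate command-line arguments for this demo
RUN_VERBOSE=false          # flags default to false...
GitHub_USER_NAME=""
while test $# -gt 0; do
   case "$1" in
      -v)  RUN_VERBOSE=true; shift ;;               # ...and switch to true when specified
      -n*) shift; GitHub_USER_NAME="$1"; shift ;;   # a flag followed by a text string
      *)   echo "Parameter \"$1\" not recognized."; shift ;;
   esac
done
export RUN_VERBOSE GitHub_USER_NAME   # exported in a separate step, as ShellCheck recommends
echo "RUN_VERBOSE=$RUN_VERBOSE for $GitHub_USER_NAME"
```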

Text color codes in messages

The Unix operating system (on which today’s Linux distributions are based) “streams” text to the console. Colors (colours) and other effects are specified by inserting “toggles” (attributes) that change the appearance of text following them. A reset sets all text to display in the default appearance.

The color and other text attributes described above are specified within functions called to display message text to the console.

On macOS the approach is to define variables containing ANSI escape numbers:


Different Linux distributions and platforms recognize different toggle codes. So the alternate approach uses the tput utility, which works on all *nix systems, to set attribute variables.

# Set less cryptic color attribute names using tput, common to all Linux distributions:
   blink=$(tput blink)         # 5 as in ANSI 5 in "\e[5m"
   bold=$(tput bold)           # 1
   dim=$(tput dim)             # 2 (faint)
   underline=$(tput smul)      # 4
   end_underline=$(tput rmul)
   reverse=$(tput rev)         # 7
# Foreground colors:
   red=$(tput setaf 1)         # 31
   green=$(tput setaf 2)       # 32
   yellow=$(tput setaf 3)      # 33
   blue=$(tput setaf 4)        # 34
   purple=$(tput setaf 5)      # 35
   cyan=$(tput setaf 6)        # 36
   white=$(tput setaf 7)       # 37
# Background colors (setab takes ANSI color numbers; the older setb does not):
   b_red=$(tput setab 1)       # 41
   b_green=$(tput setab 2)     # 42
   b_yellow=$(tput setab 3)    # 43
   b_blue=$(tput setab 4)      # 44
   b_purple=$(tput setab 5)    # 45
   b_cyan=$(tput setab 6)      # 46
   b_white=$(tput setab 7)     # 47
# Reset all attributes and colors to defaults:
   reset=$(tput sgr0)

BTW: To test how the codes look, put this in a script:

echo "${green}Success! ${dim}dimmed${reset} "
echo "${red}Failure ${bold}bolded${reset}"
echo "${blink}${yellow}Caution ${bold}bolded${reset} bad"
echo "${blue}Note${reset} blue on black is annoying"
echo "${underline}${purple}Alert${reset} magenta underlined"
echo "${reverse}${cyan}Info${reset} cyan reversed"
echo "${white}Whatever white${reset} this is"

Custom functions to echo text to screen

To format output, this code is used:

h2() {     # heading
   printf "\n${bold}>>> %s${reset}\n" "$(echo "$@" | sed '/./,$!d')"
}
info() {   # output on every run
   printf "${dim}\n➜ %s${reset}\n" "$(echo "$@" | sed '/./,$!d')"
}
note() {   # output only when -v (verbose) is specified
   if [ "${RUN_VERBOSE}" = true ]; then
      printf "${bold}${cyan} ${reset} ${cyan}%s${reset}\n" "$(echo "$@" | sed '/./,$!d')"
   fi
}
success() {
   printf "${green}✔ %s${reset}\n" "$(echo "$@" | sed '/./,$!d')"
}
error() {       # ☓
   printf "${red}${bold}✖ %s${reset}\n" "$(echo "$@" | sed '/./,$!d')"
}
warnNotice() {  # ☛
   printf "${cyan}☛ %s${reset}\n" "$(echo "$@" | sed '/./,$!d')"
}
warnError() {   # Skull: ☠  # Star: ★ U+02606  # Toxic: ☢
   printf "${red}☢ %s${reset}\n" "$(echo "$@" | sed '/./,$!d')"
}

“h2” is a homage to HTML heading names. The other functions correspond to the different levels of verbosity used by the log4j library (in the npm aws-code-deploy repo).

The printf command is used instead of echo for compatibility with all versions of Bash.

PROTIP: Notice there are Unicode icons within the text, so the file must be stored in UTF-8 format.

-vv sets debugging on to print how the codes look:

h2 "Header here"
info "info"
note "note"
success "success!"
error "error"
warning "warning (warnNotice)"
fatal "fatal (warnError)"

Set “Strict Mode”

At the beginning of the script file is:

set -e  # exits script when a command fails
# set -euo pipefail  # where pipefail is a parameter to -o

Others are there for convenience, to copy and paste to a specific point in the script where commands need to be visible for debugging:

# set -x to show commands for specific issues.
# set -o nounset

Some put them all in one line:

set -o nounset -o pipefail -o errexit  # "strict mode"

pipefail means that when any command in a pipeline exits with a code != 0, the exit code for the whole pipeline (Bash script) becomes != 0. E.g. pipefail can be useful to ensure curl does-not-exist-aaaaaaa.com | wc -c doesn’t exit with exit code 0.

Some toggle tracing on and off by defining export DEBUG=TRUE and add in the code:

if [[ "${DEBUG:-FALSE}" != "FALSE" ]]; then
  set -o xtrace
fi

Operating System Detection

We code shell scripts to operate in macOS and various distributions of Linux so that developers can focus on processing sequences, which are similar on all platforms.

uname is supposed to be available on all versions of Linux and macOS.

Darwin is the internal name of the current macOS operating system. It is based on the NeXTSTEP operating system Steve Jobs brought into Apple upon his return in 1997. [Wikipedia explains its roots in BSD]

brew is the command used by the Homebrew package manager used by macOS.

Different Linux distributions use different file names to store their version information. And different Linux distributions have their own package manager. Thus we need to obtain the PACKAGE_MANAGER used by the script.

# Check what operating system is in use:
   OS_TYPE="$( uname )"
   OS_DETAILS=""  # default blank.
if [ "$(uname)" == "Darwin" ]; then  # it's on a Mac:
   OS_TYPE="macOS"
   PACKAGE_MANAGER="brew"
elif [ "$(uname)" == "Linux" ]; then  # it's on Linux:
   if command -v lsb_release ; then
      lsb_release -a
      PACKAGE_MANAGER="apt-get"

      silent-apt-get-install(){  # "$1" refers to parameter of package to install:
         sudo DEBIAN_FRONTEND=noninteractive apt-get install -qq "$1" < /dev/null > /dev/null
      }
   elif [ -f "/etc/os-release" ]; then
      OS_DETAILS=$( cat "/etc/os-release" )  # ID_LIKE="rhel fedora"
   elif [ -f "/etc/redhat-release" ]; then
      OS_DETAILS=$( cat "/etc/redhat-release" )
   elif [ -f "/etc/centos-release" ]; then
      OS_DETAILS=$( cat "/etc/centos-release" )
   else
      error "Linux distribution not anticipated. Please update script. Aborting."
      exit 0
   fi
else
   error "Operating system not anticipated. Please update script. Aborting."
   exit 0
fi

apt-get install function

apt-get install commands in this script use a custom function which feeds in the package name to be installed:

silent-apt-get-install "git"

The function is defined where the operating system and package manager is recognized:

silent-apt-get-install(){  # "$1" refers to parameter of package to install:
   sudo DEBIAN_FRONTEND=noninteractive apt-get install -qq "$1" < /dev/null > /dev/null
}

DEBIAN_FRONTEND=noninteractive gets rid of “(Reading database … 5%” output.

-qq is there so apt-get does not request manual confirmation, such as:

Need to get 260 MB of archives.
After this operation, 308 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y

The -qq parameter combines the effect of the -y (yes) and -q (quiet) parameter, plus more suppression.

The output that remains is from dpkg, which apt-get invokes underneath. So > /dev/null redirects the standard output (stdout) to nothing so you don’t see it. However, you’ll still see error messages, which go out through stderr.

< /dev/null redirects stdin (standard input) from nothing, so the command cannot pause waiting for keyboard input.
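To see the difference between the two streams, this self-contained sketch (emit is a hypothetical demo function) captures each one separately:

```shell
emit() { echo "normal output"; echo "error output" >&2; }  # writes to both streams
out="$( emit 2>/dev/null )"      # keep stdout, discard stderr
err="$( emit 2>&1 >/dev/null )"  # keep stderr, discard stdout
echo "stdout was: $out"
echo "stderr was: $err"
```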

Version of Bash installed

Some commands make use of a more recent version of Bash than the operating system may have by default. So the script updates the bash processor if the -U flag is specified. Follow along manually:

  1. Be at a macOS Terminal.
  2. Test what version of Bash is installed on your Mac by typing this:

    bash --version

    If you see the below, you are using Bash version 3.x, which macOS first released in 2007.

    GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin16)

    In macOS Mojave, Apple still ships that old version due to licensing issues.

  3. Install the latest version of the Bash shell, using Homebrew:

    brew install bash

    Bash 4.0 was released in 2009.

    As of this writing, the response is:

    GNU bash, version 5.0.11(1)-release (x86_64-apple-darwin18.6.0)
    Copyright (C) 2019 Free Software Foundation, Inc.
    License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
    This is free software; you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.
  4. If you want to see just the version line, pipe the response to the grep utility built into macOS:

    bash --version | grep 'bash'

    Hold down the left Shift key to press the | (called pipe) key at the upper-right of the keyboard.

    grep ‘bash’ filters out lines that do not contain the word “bash” in the response.

Bash Traps

The Bash trap command catches signals so it can execute some commands when appropriate, such as cleaning up temp files before the script finishes, called an exit trap.

cleanup() {
    err=$?
    echo "Cleaning stuff up..."
    trap '' EXIT INT TERM
    exit $err
}
sig_cleanup() {
    trap '' EXIT  # some shells will call EXIT after the INT handler
    false  # sets $?
    cleanup
}

The above cleanup functions are registered at the bottom of the script, so they are invoked when an EXIT or INT/QUIT/TERM signal occurs:

trap cleanup EXIT
trap sig_cleanup INT QUIT TERM

This statement in the script prints “failed” to stderr and exits with the failing return code whenever the script ends with a non-zero status:

trap 'ret=$?; test $ret -ne 0 && printf "failed\n\n" >&2; exit $ret' EXIT
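A minimal, self-contained illustration of an exit trap cleaning up a temp file (the subshell here is only so the trap can be seen firing; in the real script the trap runs when the whole script ends):

```shell
tmpfile="$(mktemp)"                  # a scratch file for the demo
(
   trap 'rm -f "$tmpfile"' EXIT      # register cleanup to run on exit
   echo "working with $tmpfile"
)                                    # subshell exits here, firing the trap
test ! -f "$tmpfile" && echo "temp file cleaned up"
```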

Disk space free capacity

We want to know how much disk space is available at the beginning of the run, and the amount of space taken during the run.

On macOS and other BSD operating systems, the “disk free” command df -P / outputs a “standardized” number of 512 byte blocks in the “/” mount:

Filesystem   512-blocks      Used  Available Capacity  Mounted on
/dev/disk1s1 1953595632 521869880 1417651624    27%    /

We use this methodology to obtain the percentage of disk free, which obtains the 12th text item delimited by a space (the count includes heading items):

DISK_PCT_FREE=$(read -d '' -ra df_arr < <(LC_ALL=C df -P /); echo "${df_arr[11]}" )

The blocks Available is the 10th text item.

FREE_DISKBLOCKS_START=$(read -d '' -ra df_arr < <(LC_ALL=C df -P /); echo "${df_arr[10]}" )

This uses Bash arrays (available since Bash version 2) together with process substitution.

NOTE: We don’t use “-m” for megabytes or “-k” for kilobytes, which result in measuring small amounts of space used as zero.

This captures the starting count:

FREE_DISKBLOCKS_START="$( df . | cut -d' ' -f 6 )"   # e.g. 254781 MB Used

TODO: Within cloud environments such as Amazon AWS EC2 or Azure, this may still be relevant.

df is the disk free command used to obtain the number of blocks Used and Available for each storage device mounted.

. specifies calculation of the number of 512 byte blocks in the current device:

Filesystem   512-blocks      Used  Available Capacity iused               ifree %iused  Mounted on
/dev/disk1s1 1953595632 521825264 1417696240    27% 1480293 9223372036853295514    0%   /

| cut -d' ' -f 6 pipes to cut, using a space to demarcate the 6th column. The response is an integer such as “254781”. Divided by 1024, that means 248 Gigabytes.

At the end of the script, END variables are captured for use in calculating the time and disk space used during the script run.
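Putting START and END together, the byte count in the ending message can be computed like this sketch (512-byte blocks multiplied out to bytes; the variable names match those above):

```shell
FREE_DISKBLOCKS_START=$(read -d '' -ra df_arr < <(LC_ALL=C df -P /); echo "${df_arr[10]}")
# ... work that consumes disk space happens here ...
FREE_DISKBLOCKS_END=$(read -d '' -ra df_arr < <(LC_ALL=C df -P /); echo "${df_arr[10]}")
DIFF=$(( FREE_DISKBLOCKS_START - FREE_DISKBLOCKS_END ))
echo "$(( DIFF * 512 )) bytes of disk space used."
```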

Utility functions

Shell functions are defined near the beginning of the script for use later in the script.

QUESTION: What are good Bash libraries with common functions? Libraries for Bash are not common. One is /etc/rc.d/functions on RedHat-based systems, which contains functions commonly used in SysV init scripts.

NOTE: Bash libraries are scarce due to limitations of Bash functions.

Bash’s “functions” have several issues:

Code reusability: Bash functions don’t return anything; they only produce output streams. Every reasonable method of capturing that stream and either assigning it to a variable or passing it as an argument requires a SubShell, which breaks all assignments to outer scopes. (See also BashFAQ/084 for tricks to retrieve results from a function.) Thus, libraries of reusable functions are not feasible, as you can’t ask a function to store its results in a variable whose name is passed as an argument (except by performing eval backflips).

Scope: Bash has a simple system of local scope which roughly resembles “dynamic scope” (e.g. Javascript, elisp). Functions see the locals of their callers (like Python’s “nonlocal” keyword), but can’t access a caller’s positional parameters (except through BASH_ARGV if extdebug is enabled). Reusable functions can’t be guaranteed free of namespace collisions unless you resort to weird naming rules to make conflicts sufficiently unlikely. This is particularly a problem if implementing functions that expect to be acting upon variable names from frame n-3 which may have been overwritten by your reusable function at n-2. Ksh93 can use the more common lexical scope rules by declaring functions with the “function name { … }” syntax (Bash can’t, but supports this syntax anyway).
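So the common workaround is to have a function write its result to stdout and capture it with command substitution, which runs in a subshell:

```shell
add() {                      # Bash functions can't return values, only exit codes,
   echo "$(( $1 + $2 ))"     # so the result is written to the stdout stream...
}
sum="$( add 2 3 )"           # ...and captured by the caller in a subshell
echo "sum=$sum"
```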

Script run environment

These commands obtain information about the script’s environment:

HOSTNAME=$( hostname )
PUBLIC_IP=$( curl -s ifconfig.me )

PROTIP: The alternative to curl is wget, which follows redirects.

This script code prints information about the script’s running environment:

      note "Running $0 in $PWD"  # $0 = script being run in Present Working Directory.
      note "Bash $BASH_VERSION at $LOG_DATETIME"  # built-in variable.
      note "OS_TYPE=$OS_TYPE using $PACKAGE_MANAGER from $DISK_PCT_FREE disk free"
      note "on hostname=$HOSTNAME at PUBLIC_IP=$PUBLIC_IP"
   if [ -n "$OS_DETAILS" ]; then
      note "$OS_DETAILS"
   fi

PROTIP: “$0” within Bash scripts returns the script file name.

PROTIP: “$PWD” returns the “Present Working Directory” (current folder path).

Sample response:

   Running ./sample.sh in /Users/wilson_mar/gits/wilsonmar/DevSecOps/bash
  Bash 5.0.11(1)-release at 2020-01-20T00:23:03-0700-1000
  OS_TYPE=macOS using brew from 27% disk free
  on hostname=12345 at PUBLIC_IP=

wilson_mar is my user name on my macOS laptop.


Getting Initial Secrets

Keeping secrets from being exposed is the bane of developers’ existence.

We need to retrieve secrets in order to have credentials to access services on the web, such as AWS, Azure, GCP, etc.

Some think that specifying .gitignore or keeping a repo as private is enough to keep secrets safe. But anytime something is on the internet, it can be exposed.

The script retrieves a .secrets file in your user $HOME folder. Edit the file to contain something like:

# Used by https://raw.githubusercontent.com/wilsonmar/DevSecOps/master/bash/sample.sh
# Explained in https://wilsonmar.github.io/bash-scripts/#KeepingSecrets
GitHub_USER_NAME="John Doe"

Specify -s in the run parameters for the script to make use of this file.

If that is not specified, or if the file or a variable is not found, the script falls back to asking for manual input of the variables every run.
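The retrieval logic can be sketched like this (the file path and fallback value are illustrative, not the script’s exact defaults):

```shell
SECRETS_FILEPATH="$HOME/.sample.secrets.sh"   # hypothetical default path; -S overrides it
if [ -f "$SECRETS_FILEPATH" ]; then
   # shellcheck disable=SC1090
   . "$SECRETS_FILEPATH"                      # source variables such as GitHub_USER_NAME
fi
if [ -z "${GitHub_USER_NAME:-}" ]; then       # fall back to manual input every run:
   read -r -p "GitHub user name: " GitHub_USER_NAME || GitHub_USER_NAME="(unknown)"
fi
echo "Using GitHub_USER_NAME=${GitHub_USER_NAME}"
```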

In a forthcoming refactoring, we may add use of Hashicorp Vault, which puts another secret in place of the real secret.

Copy Sample files

The particular application has sample files which should be copied, then edited for use.

  • .env.example to .env
  • docker-compose.override.example.yml to docker-compose.override.yml

The script looks for the file name copied by a previous run.

File names on the local machine are specified in the repo’s .gitignore file so they don’t get pushed into GitHub.
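The copy-if-absent logic can be sketched like this, demonstrated in a scratch folder (the file content is a made-up stand-in for the repo’s sample file):

```shell
cd "$(mktemp -d)" || exit            # demo in a scratch folder
echo "PORT=8080" > .env.example      # stand-in for the repo's sample file
if [ ! -f ".env" ]; then             # don't clobber the file copied by a previous run
   cp .env.example .env
fi
cat .env
```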

GitHub and .gitsecret

If a .gitsecret folder is found in the repo, the script installs gpg and git-secret using brew.

TODO: Also detect if https://www.passwordstore.org using brew install pass.

Package Manager install

This script installs the packages managers needed for the operating system under use. brew first requires HomeBrew to be installed (using Ruby).

Read this README, which provides someone new to Macs specific steps to configure and run scripts to install apps on Macs. First finish reading its material about “shebangs” and grep for Bash shell versions.

On Macs, XCode needs to be installed for utilities needed by the HomeBrew installer.

Linux can use either a fork of Homebrew (Linuxbrew) or apt-get/yum. But Linuxbrew installs packages to a unique folder, so that path needs to be added to the search PATH in ~/.bash_profile.

brew --prefix yields “/usr/local”

Ruby Gemfile of versions

The Ruby Gemfile specifies the packages mentioned in the import statement within Ruby programs. The latest version of each package is specified by default. Or a specific version can be specified.

The Gemfile.lock file reflects what Bundler records as the exact versions installed. This way, when the same library/project is loaded on another machine, running bundle install will look at the Gemfile.lock and install the exact same versions, rather than just using the Gemfile and installing the most recent versions. (Running different versions on different machines could lead to broken tests, etc.)

Docker and docker-compose

This script can get you up and running with a DockerHub image, but with the ability to get listings of containers and images without much typing.

This is the case when running -eggplant.

-k installs and uses Docker and docker-compose. It restarts the Docker daemon if it’s already running. Either way, the Docker daemon is started.



-D stops and removes Docker containers still running.

-M removes the images pulled from DockerHub.

-R removes the cloned app repository.



Qwiklabs.com: Automating AWS Services with Scripting and the AWS CLI

Sander van Vugt (LivingOpenSource.com) https://github.com/sandervanvugt/cool-bash

Ian Miell, author of Bash the Hard Way, has a “Bash Next Steps” video course on OReilly which covers Bash 5 features.

More on DevOps

This is one of a series on DevOps:

  1. DevOps_2.0
  2. ci-cd (Continuous Integration and Continuous Delivery)
  3. User Stories for DevOps
  4. Enterprise Software

  5. Git and GitHub vs File Archival
  6. Git Commands and Statuses
  7. Git Commit, Tag, Push
  8. Git Utilities
  9. Data Security GitHub
  10. GitHub API
  11. TFS vs. GitHub

  12. Choices for DevOps Technologies
  13. Pulumi Infrastructure as Code (IaC)
  14. Java DevOps Workflow
  15. Okta for SSO & MFA

  16. AWS DevOps (CodeCommit, CodePipeline, CodeDeploy)
  17. AWS server deployment options
  18. AWS Load Balancers

  19. Cloud services comparisons (across vendors)
  20. Cloud regions (across vendors)
  21. AWS Virtual Private Cloud

  22. Azure Cloud Onramp (Subscriptions, Portal GUI, CLI)
  23. Azure Certifications
  24. Azure Cloud

  25. Azure Cloud Powershell
  26. Bash Windows using Microsoft’s WSL (Windows Subsystem for Linux)
  27. Azure KSQL (Kusto Query Language) for Azure Monitor, etc.

  28. Azure Networking
  29. Azure Storage
  30. Azure Compute
  31. Azure Monitoring

  32. Digital Ocean
  33. Cloud Foundry

  34. Packer automation to build Vagrant images
  35. Terraform multi-cloud provisioning automation
  36. Hashicorp Vault and Consul to generate and hold secrets

  37. Powershell Ecosystem
  38. Powershell on MacOS
  39. Powershell Desired System Configuration

  40. Jenkins Server Setup
  41. Jenkins Plug-ins
  42. Jenkins Freestyle jobs
  43. Jenkins2 Pipeline jobs using Groovy code in Jenkinsfile

  44. Docker (Glossary, Ecosystem, Certification)
  45. Make Makefile for Docker
  46. Docker Setup and run Bash shell script
  47. Bash coding
  48. Docker Setup
  49. Dockerize apps
  50. Docker Registry

  51. Maven on MacOSX

  52. Ansible

  53. MySQL Setup

  54. SonarQube & SonarSource static code scan

  55. API Management Microsoft
  56. API Management Amazon

  57. Scenarios for load
  58. Chaos Engineering