Wilson Mar

Package manager for Kubernetes


Overview

This article is a hands-on introduction to the Helm “Charts” used to stand up apps in a Kubernetes cluster.

Helm simplifies discovering and deploying services to a Kubernetes cluster. In that role, Helm competes with docker-compose. A Helm Chart groups multiple yaml-format files (Kubernetes object definitions) into one unit. In yaml files, indents use two spaces (and never tabs).
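To make the indentation rule concrete, here is an illustrative Kubernetes object fragment (not taken from any particular chart):

```yaml
# Two spaces per indent level; tabs are invalid YAML indentation.
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  greeting: "hello"
```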

The contribution of this article is logical commentary that is succinct yet deep.

  1. Visit helm.sh, Helm’s marketing home page (served from https://github.com/helm/helm-www using hugo). It calls Helm a package manager for Kubernetes (like Brew on macOS, Chocolatey on Windows, apt on Debian/Ubuntu, yum on Red Hat, etc.). Helm is maintained by a cloud industry consortium that includes Google, Microsoft, Bitnami, and others.

    Why Helm?

    Helm has become popular with cloud developers largely because it simplifies Kubernetes application management, the roll out of updates, and options to share applications. Package management features make it easier to:

    • search available packages
    • provide information on packages
    • download and install packages, along with dependencies, creation of folders, and insertion of those folders in the system’s PATH variable
    • list installed packages
    • lint installed packages
    • update existing installed packages
    • delete packages

    PROTIP: Words in Chart names are separated by dashes, not underscores or dots.

  2. Visit https://github.com/helm/helm where Helm is open-sourced.

  3. Visit https://helm.sh/docs

    https://helm.sh/docs/glossary

    Architecture

    The Helm CLI client running on your local machine sends requests to the Kubernetes API server. This CLI client is where operations such as rollbacks, running chart tests, etc. are performed.

    Moving from Helm 2 to 3

    Until Helm 3 was released in November 2019 (alongside Kubernetes 1.16), a Tiller server (started by helm init) ran inside the Kubernetes cluster to manage (install, upgrade, query, and remove) Kubernetes resources via calls to the Kubernetes API server. [1] Helm 3 removed Tiller and shifts the security, identity, and authorization features to Helm itself.

    See https://github.com/helm/helm-2to3 for migrating via the strangler pattern (Helm 2 and Helm 3 co-existing in the same cluster) or in situ (direct migration).

    helm3 plugin list

    Helm 3 uses Secrets as the default storage driver instead of Helm 2’s ConfigMaps (default) to store release information.

    See https://helm.sh/docs/community/history/

    The chart dependency management system moved from requirements.yaml and requirements.lock in Helm 2 to Chart.yaml and Chart.lock in Helm 3. Helm 3 also brings an improved upgrade strategy that leverages three-way strategic merge patches: Helm considers the old manifest, its live state, and the new manifest when generating a patch.
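A sketch of the Helm 3 form, with dependencies declared directly in Chart.yaml (the dependency name, version, and repository URL below are illustrative):

```yaml
# Chart.yaml (Helm 3): dependencies live here instead of requirements.yaml
apiVersion: v2
name: mychart
version: 0.1.0
dependencies:
  - name: mariadb
    version: 0.6.0
    repository: https://example.com/charts
```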


    Helm CLI client

    On your Terminal on any folder:

    This extends and summarizes https://helm.sh/docs/intro/install

  4. Install Kubernetes first.

    PROTIP: The Helm client learns about Kubernetes clusters from the kube config file. By default, Helm looks for this file where kubectl creates it ($HOME/.kube/config).
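If your kubeconfig lives somewhere else, point Helm (and kubectl) at it with the standard KUBECONFIG variable; a minimal sketch using the default path:

```shell
# Helm honors the same KUBECONFIG variable that kubectl uses.
export KUBECONFIG="$HOME/.kube/config"
echo "Helm will read cluster info from: $KUBECONFIG"
```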

  5. See whether you already have it installed:

    helm version

    If you see something like this, you already have it installed:

    version.BuildInfo{Version:"v3.2.1", GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a", GitTreeState:"clean", GoVersion:"go1.13.10"}
    
  6. What is the latest Kubernetes helm CLI client for macOS?

    brew info helm

    Response at time of writing:

    helm: stable 3.2.1 (bottled), HEAD
    The Kubernetes package manager
    https://helm.sh/
    /usr/local/Cellar/helm/3.2.1 (7 files, 43.3MB) *
      Poured from bottle on 2020-05-30 at 03:34:11
    From: https://github.com/Homebrew/homebrew-core/blob/master/Formula/helm.rb
    ==> Dependencies
    Build: go@1.13 ✘
    ==> Options
    --HEAD
    Install HEAD version
    ==> Caveats
    Bash completion has been installed to:
      /usr/local/etc/bash_completion.d
     
    zsh completions have been installed to:
      /usr/local/share/zsh/site-functions
    ==> Analytics
    install: 36,371 (30 days), 110,830 (90 days), 251,843 (365 days)
    install-on-request: 35,636 (30 days), 108,563 (90 days), 246,528 (365 days)
    build-error: 0 (30 days)
    

    History:

    /usr/local/Cellar/helm/3.2.1 (7 files, 43.3MB) *
    /usr/local/Cellar/helm/3.1.1... (7 files, 41.2MB)
    /usr/local/Cellar/helm/3.1.0 (7 files, 41.2MB) *
    
  7. To install helm CLI client for the first time:

    brew install helm

    To upgrade Kubernetes helm CLI client (if brew info returned a version):

    brew upgrade helm

    Sample response:

    ==> Downloading https://storage.googleapis.com/helm/releases/v3.1.0/helm-darwin-amd64
    

    Obtain the version again after an upgrade.

    PROTIP: Helm is written in the Go language, built using the Make utility.

    helm env

  8. Examine your local Helm environment locations:

    helm env

    Notice that the macOS Library folder is used for the storage locations:

    HELM_BIN="helm"
    HELM_DEBUG="false"
    HELM_KUBECONTEXT=""
    HELM_NAMESPACE="default"
    HELM_PLUGINS="/Users/wilson_mar/Library/helm/plugins"
    HELM_REGISTRY_CONFIG="/Users/wilson_mar/Library/Preferences/helm/registry.json"
    HELM_REPOSITORY_CACHE="/Users/wilson_mar/Library/Caches/helm/repository"
    HELM_REPOSITORY_CONFIG="/Users/wilson_mar/Library/Preferences/helm/repositories.yaml"
    

    NOTE: Helm stores its configuration files in XDG Base Directory locations, created the first time helm is run.

    Cache:  $XDG_CACHE_HOME - ${HOME}/.cache/helm/
    Config: $XDG_CONFIG_HOME - ${HOME}/.config/helm/
    Data:   $XDG_DATA_HOME - ${HOME}/.local/share/helm/
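Those XDG variables are usually unset on macOS; the fallback locations Helm would then use can be previewed in any shell (the /helm suffix is Helm’s convention):

```shell
# Print the locations Helm falls back to when the XDG variables are unset.
echo "Cache:  ${XDG_CACHE_HOME:-$HOME/.cache}/helm"
echo "Config: ${XDG_CONFIG_HOME:-$HOME/.config}/helm"
echo "Data:   ${XDG_DATA_HOME:-$HOME/.local/share}/helm"
```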
    

    Helm 3 puts CRDs (Custom Resource Definitions) in the “crds” directory; they can be skipped during install using --skip-crds. https://github.com/bitnami-labs/helm-crd is not under active development.

    Helm 3 provides a Go SDK in addition to the CLI.

    Helm 3 adds (experimental) support for the Open Container Initiative (OCI) Docker Registry API, so charts can be stored in container registries.

Create new Helm Chart

A Helm “Chart” is a collective noun for a set of folders and files.

  1. Create a new Helm Chart:

    helm create mychart
    

    Optionally, the --starter option can be added to specify a “starter chart”.

    Starter Charts live in $XDG_DATA_HOME/helm/starters. Chart developers author charts specifically designed to be used as starters. The Chart.yaml of a starter is overwritten by the generator. Users will expect to modify such a chart’s contents, so documentation should indicate how they can do so. Starter charts are used as templates, with all occurrences of CHARTNAME replaced by the specified chart name.
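The CHARTNAME substitution can be simulated outside Helm with plain sed (a sketch; the file path and chart name below are made up):

```shell
# Simulate how a starter chart's CHARTNAME token is replaced by the new chart's name.
mkdir -p /tmp/starter-demo
printf 'apiVersion: v2\nname: CHARTNAME\nversion: 0.1.0\n' > /tmp/starter-demo/Chart.yaml
sed 's/CHARTNAME/mychart/' /tmp/starter-demo/Chart.yaml
```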

  2. Examine the files created using a Tree command:

    tree
    ├── Chart.yaml
    ├── charts
    ├── templates
    └── values.yaml
       

    Each Helm Chart must contain a Chart.yaml file (with a capital C); a values.yaml file (with a lower-case v) overrides default values with your own information.

    PROTIP: apiVersion is v2 starting with Helm 3. Sorry for the confusion.
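A minimal Helm 3 Chart.yaml, roughly what helm create scaffolds (the field values here are illustrative):

```yaml
apiVersion: v2              # v2 means Helm 3; Helm 2 charts use v1
name: mychart
description: A Helm chart for Kubernetes
type: application
version: 0.1.0              # chart version (SemVer)
appVersion: "1.0"           # version of the app being deployed
```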

    Templates

  3. In the templates folder:

    ├── templates
    │   ├── NOTES.txt
    │   ├── _helpers.tpl
    │   ├── deployment.yaml
    │   ├── ingress.yaml
    │   ├── service.yaml
    │   ├── serviceaccount.yaml
    │   └── tests
    │       └── test-connection.yaml
       

    Template yaml files contain placeholders:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: 
      labels:
      annotations:

    From: VIDEO: What is Helm? (with Tiller) Dec 18, 2019 [9:05]
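In the generated chart, those placeholders are filled by Go template expressions; a sketch based on the default serviceaccount.yaml scaffold (the helper names assume a chart named mychart):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "mychart.serviceAccountName" . }}
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
```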

    Dependencies in requirements.yaml

    The requirements.yaml file specifies a database:

    dependencies:
    - name: mariadb
      version: 0.6.0
      repository: https://kubernetes-charts.storage.googleapis.com
    

    The charts folder is populated with the archives of “dependency” charts, each with its own set of yaml files.

Lint a Chart

  1. Validate that a Chart follows the conventions and requirements of the Helm chart standard (a JSON schema).

    Linting is automatic with helm install, upgrade, and template. But you can run it anytime:

    helm lint

    Sample output:

    ==> Linting .
    [INFO] Chart.yaml: icon is recommended
     
    1 chart(s) linted, 0 chart(s) failed
       

Specify app

Since Kubernetes works off Docker images, specify the Docker image in values.yaml, such as for the simple “ToDo” app:

image:
   repository: prydonius/todo
   tag: latest
   pullPolicy: IfNotPresent
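Those values are consumed in templates through the .Values object; a sketch of how deployment.yaml would reference them (following common chart conventions, not copied from a specific chart):

```yaml
# Inside templates/deployment.yaml:
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
```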
   

The client CLI knows to look in the https://hub.helm.sh public repository.

PROTIP: A Chart release number is an incremental counter that advances even on rollback. A Semantic Version number (such as 1.2.3) is required on every chart.

  1. Search for the ToDo chart this tutorial uses.

  2. For a list of all apps in Hub:

    helm search hub

Search apps

  1. Search for a specific Chart:

    helm search hub vault

    Note that the list contains “stable” and “incubator” editions.

  2. To see logos among publicly available charts, view https://hub.helm.sh, search for “stable” Charts:

    • Anchore, Clair
    • web server Apache, Nginx, Tomcat, WordPress
    • Argo-cd, GitLab
    • Artifactory
    • Databases: Cassandra, MongoDB, CockroachDB, MySQL, Neo4j, Spark, Spinnaker
    • Secrets manager: Consul, Vault
    • Testing tools: JMeter, Selenium
    • Elastic Stack, Logstash, Prometheus, Kibana,
    • Weave

Add repo

helm repo add dev https://hub.helm.sh
   

Install Chart in Kubernetes

This is a summary of https://helm.sh/docs/intro/using_helm/

  1. Run it:

    helm install todo ./mychart --set service.type=NodePort
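    The --set flag is shorthand for a values override; the same effect comes from a small values file passed with -f (the file name override.yaml is made up):

```yaml
# override.yaml: equivalent to --set service.type=NodePort
service:
  type: NodePort
```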
    
  2. Highlight and copy the response to your Clipboard to paste in your local Terminal:

    For example:

    export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services todo-mychart)
    export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
    echo http://$NODE_IP:$NODE_PORT 
       
  3. Copy and paste the URL in the response (such as http://127.0.0.1:8080) in your browser’s address to see the app’s UI.

    In Kubernetes

  4. See what is running in the Kubernetes cluster:

    helm list --all
    

    Uninstall

  5. To uninstall:

    helm uninstall todo --keep-history

    Package to Archive

  6. Package a Chart folder:

    helm package mychart

    After a Chart is packaged by being tarred and gzipped (compressed/packed) to a .tgz file, and optionally signed, it is called an archive.

    helm verify mychart-0.1.0.tgz
    

    A Chart may be accompanied by a .prov (provenance) file which details where the chart came from and what it contains. The cryptographic hash (an OpenPGP “clearsign” signature block) of the chart archive file is used to determine whether the chart file has been tampered with.
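The integrity idea behind provenance can be demonstrated with a plain digest (a sketch; real .prov files carry an OpenPGP signature over the hash, not a bare sha256sum):

```shell
# Hash an archive, alter it, and hash again: any change flips the digest.
printf 'original chart bytes' > /tmp/demo-chart.tgz
before=$(sha256sum /tmp/demo-chart.tgz | cut -d' ' -f1)
printf 'tampered chart bytes' > /tmp/demo-chart.tgz
after=$(sha256sum /tmp/demo-chart.tgz | cut -d' ' -f1)
if [ "$before" != "$after" ]; then echo "digest mismatch: archive was modified"; fi
```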


Ingress

VIDEO: Kubernetes Ingress Explained Completely For Beginners by KodeKloud

Interactive

“Helm Package Manager” on Qwiklabs covers installing and configuring a Chart (based on MySQL) on GCP.

Videos

[3] CNCF Webinar Series – Getting Helm to be Enterprise-ready Apr 3, 2018

https://www.youtube.com/watch?v=TJ9hPLn0oAs Create a Helm chart Oct 3, 2019 https://docs.bitnami.com/tutorials/create-your-first-helm-chart

Articles

An Introduction to Helm, the Package Manager for Kubernetes August 6, 2018 by Brian Boucheron

https://www.katacoda.com/courses/docker-production/vault-secrets

Helmsman Desired State Configurator

Open-sourced at https://github.com/Praqma/helmsman, Helmsman from Praqma (by Sami Alajrami and others) provides an “autopilot” for Kubernetes clusters: it automates the lifecycle management of Helm Charts, using declarative (desired state) configuration files (DSF) to create, delete, upgrade, and move Kubernetes objects to different namespaces. This approach makes it easier to replicate a CI pipeline. It also takes care of passing secrets (from environment variables to Charts).

https://hub.docker.com/r/praqma/helmsman/

The desired state approach achieves idempotency: executing Helmsman several times gets the same result, and it continues from failures.

Videos

Venkat’s playlist on Kubernetes includes:

Helm (v1) and Kubernetes Tutorial - Introduction by Matthew Palmer

YouTube playlist

An Introduction to Helm [36:49] by Matt Farina, Samsung SDS & Josh Dolitsky, Blood Orange

Helm 3 Deep Dive, a Nov 22, 2019 CNCF [Cloud Native Computing Foundation] video by Helm core maintainers Taylor Thomas (@_oftaylor, Microsoft Azure) and Martin Hickey (@mhickeybot, IBM), explains how the security model changed: merge/upgrade does a three-way compare that also includes the cluster’s live state.

Managing Helm Deployments with Gitops at CERN a CNCF video by Ricardo Rocha, CERN [32:02]

Helm 3: Navigating to Distant Shores by Codefresh

If you have an O’Reilly subscription, the 10-minute “almost-live” hands-on scenario (from Katacoda) “Get Started with the Helm Package Manager” has you clicking each command and seeing it executed on an Ubuntu Bash terminal. This scenario teaches you how to use Helm, the package manager for Kubernetes, to deploy Redis.

  1. Wait for Kubernetes to start. Then install Helm using a curl command.
  2. The scenario is based on Helm version 2 because it tells you to update the local cache to sync the latest available packages with the environment:

    helm init
    helm repo update
  3. Search for Redis charts:

    helm search redis

  4. Inspect configuration policies:

    helm inspect stable/redis
  5. To deploy the chart to your cluster:

    helm install stable/redis

  6. List package namespaces installed:

    helm ls

  7. Find out what pods, replication controllers, and services (master and slave) were deployed:

    kubectl get all

  8. Apply the persistent volume definition:

    kubectl apply -f pv.yaml

    The pod remains in a pending state while the Docker Image is downloaded.

  9. Grant Redis data mount permissions to write:

    chmod 777 -R /mnt/data*

  10. Provide helm with a more friendly name “my-release”:

    helm install --name my-release stable/redis

  11. To get your password run:

    export REDIS_PASSWORD=$(kubectl get secret --namespace default dinky-newt-redis -o jsonpath="{.data.redis-password}" | base64 --decode)
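    The base64 --decode step is needed because Kubernetes stores Secret values base64-encoded; the round trip can be tried with any string (the sample password below is made up):

```shell
# Encode a sample password the way Kubernetes stores Secret data, then decode it back.
encoded=$(printf 's3cr3tpass' | base64)
printf '%s' "$encoded" | base64 --decode
```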
  12. To connect to your Redis server, run a Redis pod that you can use as a client:

    kubectl run --namespace default dinky-newt-redis-client --rm --tty -i --restart='Never' \
      --env REDIS_PASSWORD=$REDIS_PASSWORD \
      --image docker.io/bitnami/redis:5.0.7-debian-10-r27 -- bash

  13. Connect using the Redis CLI:

    redis-cli -h dinky-newt-redis-master -a $REDIS_PASSWORD
    redis-cli -h dinky-newt-redis-slave -a $REDIS_PASSWORD

  14. To connect to your database from outside the cluster execute the following commands:

    kubectl port-forward --namespace default svc/dinky-newt-redis-master 6379:6379 &
    redis-cli -h 127.0.0.1 -p 6379 -a $REDIS_PASSWORD

Cloud vendors

https://www.digitalocean.com/community/tutorials/an-introduction-to-helm-the-package-manager-for-kubernetes

https://docs.aws.amazon.com/eks/latest/userguide/helm.html

https://aws.amazon.com/blogs/startups/from-zero-to-eks-with-terraform-and-helm/

Helm installing Vault

https://www.hashicorp.com/blog/announcing-the-vault-helm-chart/

https://www.vaultproject.io/docs/platform/k8s/helm

https://github.com/hashicorp/vault-helm

https://github.com/hashicorp/consul-helm

Ansible is used in aws_eks_cluster.py. Compare the Python 3.8 vs. 3.7 versions:

diff /usr/local/Cellar/ansible/2.9.6_2/libexec/lib/python3.8/site-packages/ansible/modules/cloud/amazon/aws_eks_cluster.py $HOME/Library/Python/3.7/lib/python/site-packages/ansible/modules/cloud/amazon/aws_eks_cluster.py


More on DevOps

This is one of a series on DevOps:

  1. DevOps_2.0
  2. ci-cd (Continuous Integration and Continuous Delivery)
  3. User Stories for DevOps
  4. Enterprise Software

  5. Git and GitHub vs File Archival
  6. Git Commands and Statuses
  7. Git Commit, Tag, Push
  8. Git Utilities
  9. Data Security GitHub
  10. GitHub API
  11. TFS vs. GitHub

  12. Choices for DevOps Technologies
  13. Pulumi Infrastructure as Code (IaC)
  14. Java DevOps Workflow
  15. Okta for SSO & MFA

  16. AWS DevOps (CodeCommit, CodePipeline, CodeDeploy)
  17. AWS server deployment options
  18. AWS Load Balancers

  19. Cloud services comparisons (across vendors)
  20. Cloud regions (across vendors)
  21. AWS Virtual Private Cloud

  22. Azure Cloud Onramp (Subscriptions, Portal GUI, CLI)
  23. Azure Certifications
  24. Azure Cloud

  25. Azure Cloud Powershell
  26. Bash Windows using Microsoft’s WSL (Windows Subsystem for Linux)
  27. Azure KSQL (Kusto Query Language) for Azure Monitor, etc.

  28. Azure Networking
  29. Azure Storage
  30. Azure Compute
  31. Azure Monitoring

  32. Digital Ocean
  33. Cloud Foundry

  34. Packer automation to build Vagrant images
  35. Terraform multi-cloud provisioning automation
  36. Hashicorp Vault and Consul to generate and hold secrets

  37. Powershell Ecosystem
  38. Powershell on MacOS
  39. Powershell Desired System Configuration

  40. Jenkins Server Setup
  41. Jenkins Plug-ins
  42. Jenkins Freestyle jobs
  43. Jenkins2 Pipeline jobs using Groovy code in Jenkinsfile

  44. Docker (Glossary, Ecosystem, Certification)
  45. Make Makefile for Docker
  46. Docker Setup and run Bash shell script
  47. Bash coding
  48. Docker Setup
  49. Dockerize apps
  50. Docker Registry

  51. Maven on MacOSX

  52. Ansible

  53. MySQL Setup

  54. SonarQube & SonarSource static code scan

  55. API Management Microsoft
  56. API Management Amazon

  57. Scenarios for load
  58. Chaos Engineering