
Container engine for every cloud


Overview

This is a hands-on “deep dive” tutorial with commentary along the way, arranged in a sequence to make this complex material easier to understand quickly.

Why Kubernetes?

Kubernetes is called “container orchestration” software because it automates the deployment, scaling, and management of containerized applications.

“Containerized” microservice apps are dockerized into images pulled from DockerHub, or private security-vetted images in Docker Enterprise, Quay.io, or an organization’s own binary repository set up using Nexus or Artifactory. Kubernetes also works with rkt (pronounced “rocket”) containers, but this tutorial focuses on Docker.

Each Kubernetes node has a different IP address.

k8s-container-sets-479x364

Kubernetes automates resilience into containers by abstracting the network and storage of containers into replaceable “pods”. Each pod can hold one or more Docker containers.

Within a pod, each container has a different port number. But containers share the same IP address, hostname, Linux namespaces, cgroups, storage, and other resources.
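To make this concrete, here is a minimal sketch of a two-container pod (names are hypothetical); both containers share the pod’s single IP address, so any ports they listen on must differ:

apiVersion: v1
kind: Pod
metadata:
  name: two-containers     # hypothetical pod name
spec:
  containers:
  - name: web              # serves on the pod's IP at port 80
    image: nginx
    ports:
    - containerPort: 80
  - name: helper           # shares the pod's IP, hostname, and volumes;
    image: busybox         # if it served traffic, it would need a port other than 80
    command: ["sh", "-c", "sleep 3600"]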

Kubernetes replicates Pods across several worker nodes (VM or physical machines).

k8s-arch-ruo91-797x451-104467

The diagram above is referenced throughout this tutorial, particularly in the Details section below. It is by Yongbok Kim who presents animations on his website.

PROTIP: Kubernetes recently added auto-scaling based on metrics API measurement of demand. Before that, Kubernetes managed the instantiating, starting, stopping, updating, and deleting of a pre-defined number of pod replicas based on declarations in *.yaml files or interactive commands.

The number of pods replicated is specified in deployment YAML files. Service YAML files specify the ports used by deployments, as the sketch below shows.
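For example, here is a minimal sketch of such a pair (names and counts are hypothetical, using the apps/v1beta2 API current at the time of this tutorial):

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: hello-deploy       # hypothetical name
spec:
  replicas: 3              # number of pod replicas to keep running
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-svc          # hypothetical name
spec:
  selector:
    app: hello             # must match the pod labels above
  ports:
  - port: 80
    protocol: TCP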

Open Sourced

kubernetes-logo-125x134-15499.png This blog and podcast note that the Kubernetes logo has 7 sides because its initial developers were Star Trek fans: the predecessor to Kubernetes was called Borg, and a key Borg character is called “7 of 9”.

Kubernetes was created inside Google (using the Go programming language), drawing on a decade of experience running its internal predecessor (Borg), before being open-sourced in 2014 and later donated to the Cloud Native Computing Foundation (cncf.io).

Kubernetes is often abbreviated as “k8s”, with 8 standing for the eight characters between k and s. Thus, https://k8s.io redirects you to https://kubernetes.io, the home page for the software. PROTIP: The word “Kubernetes” is a registered trademark of the Linux Foundation, which maintains the website https://kubernetes.io and source code at https://github.com/kubernetes/kubernetes.

  • v1.0 was committed in July 2015 on GitHub.
  • v1.6 was led by a CoreOS developer.
  • v1.7 was led by a Googler.
  • v1.8 is led by a Microsoft employee (Jaice Singer DuMars), after Microsoft joined the CNCF in July 2017.

    Its Google heritage means Kubernetes is about scaling for a lot of traffic with redundancies to achieve high availability (HA).

Certification in Kubernetes

On November 8, 2016, the CNCF announced its 3-hour task-based Certified Kubernetes Administrator (CKA) exam and 2-hour Certified Kubernetes Application Developer (CKAD) exam.

CNCF is part of the Linux Foundation, so…

  1. Get an account (Linux Foundation credentials) at https://identity.linuxfoundation.org.

    It’s a non-profit organization, thus the “.org”.

  2. Login to https://linuxfoundation.org and join as a member for a $100 discount toward certifications.
  3. Go to https://training.linuxfoundation.org/linux-courses/system-administration-training/kubernetes-fundamentals and pay for the $300 exam, or for $199 more, take their class as well.

  4. Use your Linux Foundation credentials to sign in at examslocal.com, and select either or both of the two exams from CNCF:

    • Linux Foundation : Certified Kubernetes Administrator (CKA) - English
    • Linux Foundation : Certified Kubernetes Application Developer (CKAD) - English

  5. Click “Or Sign In With” tab and select “Sign in for exams powered by the Linux Foundation”.

  6. Log in using your preferred account.

  7. Click “Handbook link” to download it.
  8. Select the date, then click OK.
  9. Set up your home computer to take the exam at home using the Chrome extension from “Innovative Exams”, which uses your laptop’s camera and microphone to watch you use a virtual Ubuntu machine.
  10. Take the 180-minute exam.

There are no multiple-choice questions.

PROTIP: The Linux Foundation exam focuses on “pure” commands only and excludes add-ons such as OpenStack.

Support in clouds

If you want to pay for Kubernetes support, Red Hat® OpenShift, at https://www.redhat.com/en/technologies/cloud-computing/openshift, enables Docker and Kubernetes for the enterprise by adding external host names and role-based security.

One can run k8s containers in other clouds or within private data centers using OpenStack from Red Hat.

Being open source has enabled Kubernetes to flourish on several clouds.

Competitors

Other orchestration systems for Docker containers:

  • Docker Swarm

    Rancher, from Rancher Labs, can deploy and manage Docker Swarm clusters.

  • Mesos from Apache, which runs other containers in addition to Docker. K8SM is a Mesos framework developed for Apache Mesos to use Google’s Kubernetes.



Container Orchestration Wars (2017) at the Velocity Conf 19 Jun 2017 by Karl Isenberg (@karlfi) of Mesosphere

Kubelet

A Kubelet agent program is automatically installed when a node is created. Each kubelet registers its node with the cluster’s “control plane”, which manages the nodes under its control.

The kubelet constantly compares the status of pods against what is declared in yaml files, and starts or deletes pods as necessary to meet the declaration.

Restarting the kubelet itself depends on the operating system (Monit on Debian, or systemctl on systemd-based systems).

Master node

Nodes are joined to the master node using the kubeadm join command.

The master node itself is created by the kubeadm init command, which establishes folders and invokes the Kubernetes API server. kubeadm is installed along with the kubectl package, and reports its version with kubeadm version. The kubectl get nodes command lists basic information about each node. The describe command provides more detailed information.

API Server

The kubectl client communicates using REST API calls to an API Server which handles authentication and authorization.

Scheduler

The API Server puts nodes in “pending” state when it sends requests to bring them up and down to the Scheduler, which does so only when there are enough resources available. Rules obeyed by the Scheduler about nodes are called “Taints”. Rules obeyed by the Scheduler about pods are called “Tolerations”. Such details are revealed using the kubectl describe nodes command. A sketch of a taint with a matching toleration follows.
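For example (node name and taint key are hypothetical), a taint is placed on a node with kubectl, and a pod declares a matching toleration in its spec:

kubectl taint nodes node1 dedicated=batch:NoSchedule

tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "batch"
  effect: "NoSchedule"

Pods without this toleration are not scheduled onto node1.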

etcd storage

The API Server and Scheduler persist their configuration and status information in an etcd cluster (from CoreOS).

Kubernetes data stored in etcd includes jobs being scheduled, created and deployed, pod/service details and state, namespaces, and replication details.

It’s called a cluster because, for resiliency, etcd replicates data across several nodes. To maintain a quorum, an etcd cluster runs an odd number of members (typically three or five).

Controllers

The Node controller assigns a CIDR block to newly registered nodes, then continually monitors node health. When necessary, it taints unhealthy nodes and gracefully evicts unhealthy pods. The default timeout is 40 seconds.

Communications with outside callers occur through a single Virtual IP address (VIP) going through the kube-proxy which load balances traffic to deployments, which are load-balanced sets of pods within each node.

Load balancing among nodes (hosts within a cloud) is handled by third-party port forwarding via Ingress controllers. See Ingress definitions.

An “Ingress” is a collection of rules that allow inbound connections to reach the cluster services.

In Kubernetes, the Ingress Controller could be an NGINX container providing reverse proxy capabilities, while the Ingress Resource defines the connection rules.
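A minimal sketch of an Ingress Resource (host and service names are hypothetical, using the extensions/v1beta1 API of this era):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
spec:
  rules:
  - host: hello.example.com      # hypothetical external host name
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-svc # hypothetical service to route to
          servicePort: 8080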

OpenShift

OpenShift’s Router is instead a HAProxy container (taking the place of NGINX).

This diagram illustrates what OpenShift adds: kubernetes-openshift-502x375-107638

“The primary grouping concept in Kubernetes is the namespace. Namespaces are also a way to divide cluster resources between multiple uses. That being said, there is no security between namespaces in Kubernetes; if you are a “user” in a Kubernetes cluster, you can see all the different namespaces and the resources defined in them.” – from the book: OpenShift for Developers, A Guide for Impatient Beginners by Grant Shipley and Graham Dumpleton.

k8s-openshift-projects-461x277-64498

Projects in OpenShift provide “walls” between namespaces, ensuring that users or applications can only see and access what they are allowed to. OpenShift projects wrap a namespace by adding security annotations which control access to that namespace. Access is controlled through an authentication and authorization model based on users and groups.
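In plain Kubernetes, namespace-based grouping can be sketched with two commands (namespace name hypothetical):

kubectl create namespace team-a
kubectl get pods --namespace=team-a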

Plug-in Network

PROTIP: Kubernetes uses third-party services to handle load balancing and port forwarding through ingress objects managed by an ingress controller.

CNI (Container Network Interface)

Flannel is one CNI plug-in. Other CNI vendors include Calico, Cilium, Contiv, and Weave Net.

HAProxy cluster

For network resiliency, an HAProxy cluster distributes traffic among nodes.

cAdvisor

To collect resource usage and performance characteristics of running containers, many install a pod containing Google’s Container Advisor (cAdvisor). It aggregates and exports telemetry to an InfluxDB database for visualization using Grafana.

Google’s Heapster can also be used to send metrics to Google’s cloud monitoring console.


Helm charts

The name Kubernetes is the ancient Greek word for those who pilot cargo ships – “helmsman” in English. Thus the nautical references: Kubernetes experts are called “captains” and associated products have nautical themes, such as “Helm”.

A Helm chart can be used to quickly create an OpenFaaS (Serverless) cluster on your laptop.

git clone https://github.com/openfaas/faas-netes && cd faas-netes
kubectl apply -f ./namespaces.yml
kubectl apply -f ./yaml_armhf

Deploy a scalable web application to Kubernetes using Helm
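For a Helm-based install, assuming Helm is installed and the OpenFaaS chart repository is at the URL below (verify against the faas-netes README), a sketch:

helm repo add openfaas https://openfaas.github.io/faas-netes/
helm repo update
helm install --name openfaas --namespace openfaas openfaas/openfaas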

Topics

  • Infrastructure as code
  • Manage containers
  • Naming and discovery
  • Mounting storage systems
  • Balancing loads
  • Rolling updates
  • Distributing secrets/config
  • Checking application health
  • Monitoring resources
  • Accessing and ingesting logs
  • Replicating application instances
  • Horizontal autoscaling
  • Debugging applications

Containers are declared in YAML such as this, which runs an Alpine Linux Docker container:

apiVersion: v1
kind: Pod
metadata:
  name: alpine
  namespace: default
spec:
  containers:
  - name: alpine
    image: alpine
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
   

Kubernetes is written in the Go language, so it can run on Windows, Linux, and MacOS (without the need to install a JVM).

Raspberry Pi

Read how the legendary Scott Hanselman built Kubernetes on 6 Raspberry Pi nodes, each with a 32GB SD card and a 1GB-RAM ARM chip (like those on smartphones).

Hanselman talked with Alex Ellis (@alexellisuk), who keeps his instructions and shell script updated for installing OpenFaaS on the Pis.

CNCF Ambassador Chris Short developed the rak8s (pronounced rackets) library to make use of Ansible.

Others:

  • https://blog.hypriot.com/getting-started-with-docker-on-your-arm-device/
  • https://blog.sicara.com/build-own-cloud-kubernetes-raspberry-pi-9e5a98741b49

Architecture diagram

Yongbok Kim (who writes in Korean) posted (on Jan 24, 2016) a master map of how all the pieces relate to each other:
Click on the diagram to pop up a full-sized diagram: k8s_details-ruo91-350x448.jpg

BTW, what are now called “nodes” were previously called minions. Apparently Google namers forgot about the existence of Node.js, which uses the word “node” differently.

Testing

End-to-end tests by those who develop Kubernetes are coded in Ginkgo and Gomega (because Kubernetes is written in Go).

The Kubetest suite builds, stages, extracts, and brings up the cluster. After testing, it dumps logs and tears down the test rig.

Social

  • Twitter: @kubernetesio
  • https://slack.k8s.io
  • Google+ Group: Kubernetes
  • https://groups.google.com/forum/#!forum/kubernetes-announce for announcements
  • https://groups.google.com/forum/#!forum/kubernetes-dev for contributors to the Kubernetes project to discuss design and implementation issues.
  • https://stackoverflow.com/search?q=k8s+or+kubernetes for developers
  • https://serverfault.com/search?q=k8s+or+kubernetes for sysadmins.
  • https://groups.google.com/forum/#!forum/kubernetes-sig-scale
  • https://www.youtube.com/playlist?list=PL69nYSiGNLP1pkHsbPjzAewvMgGUpkCnJ&disable_polymer=true Kubernetes Google Community video chats

  • https://cloud.google.com/support/docs/issue-trackers to report bugs

  • KubeCon.io Conferences (#KubeConio)

Installation options

There are several ways to obtain a running instance of Kubernetes.

Rancher is a deployment tool for Kubernetes that also provides networking and load-balancing support. Rancher initially created its own framework (called Cattle) to coordinate Docker containers across multiple hosts, at a time when Docker was limited to running on a single host. Now Rancher’s networking provides a consistent solution across a variety of platforms, especially on bare metal or standard (non-cloud) virtual servers. In addition to Kubernetes, Rancher enables users to deploy a choice of Cattle, Docker Swarm, or Apache Mesos (the upstream project for DC/OS, the Data Center Operating System).

A) KOPS (Kubernetes Operations) creates production clusters from the command line: https://github.com/kubernetes/kops

Minikube offline

B) Minikube spins up a local environment on your laptop.

NOTE: Ubuntu on LXD offers a 9-instance Kubernetes cluster on localhost.

PROTIP: CAUTION: your laptop going to sleep may break Minikube.

Server install

C) Install Kubernetes natively on CentOS.

D) Pull an image from Docker Hub within a Google Compute or AWS cloud instance.

CAUTION: If you are in a large enterprise, confer with your security team before installing. They often have a repository such as Artifactory or Nexus where installers are available after being vetted and perhaps patched for security vulnerabilities.

See https://kubernetes.io/docs/setup/pick-right-solution

Minikube

Minikube goes beyond Docker For Mac (DFM) and Docker for Windows (DFW): it includes a node and a Master when it spins up in a local environment (such as your laptop).

CAUTION: At time of writing, https://github.com/kubernetes/minikube has 257 issues and 20 pending Pull Requests.

  1. On a Mac, install the Docker xhyve driver:

    
    brew install docker-machine-driver-xhyve
    
  2. On a Mac, install Minikube:

    
    brew install minikube
    
  3. Verify that the command works by getting the version:

    minikube version
  4. Show the current context:

    
    kubectl config current-context
    
    The response on minikube is "minikube".
    
    
  5. Start the service:

    On Mac:

    minikube start --vm-driver=xhyve
    

    On Windows:

    minikube start --vm-driver=hyperv
    
  6. Dashboard

    minikube dashboard
  7. Stop the service:

    minikube stop
  8. Recover space:

    minikube delete

    Kubectl 1.8 scale is now the preferred way to control graceful delete.
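    For example, scaling a deployment down to zero replicas gracefully deletes its pods (deployment name hypothetical):

    kubectl scale deployment hello-deploy --replicas=0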

    Kubectl 1.8 rollout and rollback now support stateful sets.

    kubectl CLI client install

    Kubernetes administrators use kubectl (kube + ctl), the CLI tool run outside Kubernetes servers to control them. It’s automatically installed within Google cloud instances, but on Mac clients:

  9. Install on a Mac:

    
    brew install kubectl
    
    🍺  /usr/local/Cellar/kubernetes-cli/1.8.3: 108 files, 50.5MB
    
  10. Verify

    
    kubectl version --client
    

    A sample response:

    Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1", GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean", BuildDate:"2018-04-13T22:27:55Z", GoVersion:"go1.9.5", Compiler:"gc", Platform:"darwin/amd64"}
    
    1. Check the status of the job using the kubectl describe command.

    2. When a job is complete, view its results:

    kubectl logs counter

    The API Server routes several kinds of yaml declaration files: Pod, Deployment (of pods), Service, Job, and ConfigMap.

    API primitives ???

https://plugins.jetbrains.com/plugin/10485-kubernetes

CentOS

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
   

Also:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
   

Ubuntu

  1. On Ubuntu, install:

    apt install -y docker.io
  2. To make sure Docker and the kubelet are using the same systemd driver:

    cat <<EOF >/etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
  3. Install the keys:

    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
  4. Add the sources:

    cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
    deb http://apt.kubernetes.io/ kubernetes-xenial main
    EOF
  5. To download new sources:

    apt update
  6. To download the programs:

    apt install -y kubelet kubeadm kubectl

Details

This section further explains the architecture diagram above.

As described in the Linux Academy’s CKA course (05:34:43 of videos by Chad Miller, @OpenChad), this is the sequence of commands:

  1. Select “CloudNativeKubernetes” sandboxes.
  2. Select the first instance as the “Kube Master”.
  3. Login that server (user/123456).
  4. Change the password as prompted on the Ubuntu 16.04.3 server.

    Deploy Kubernetes master node

  5. Use this command to deploy the master node, which controls the other nodes, so it’s deployed first. It invokes the API Server:

    sudo kubeadm init --pod-network-cidr=10.244.0.0/16

    kubernetes-nodes-363x120-20150

    The address is the default for Flannel.

    Flow diagram

    k8s-services-flow-847x644-100409

    The diagram above is by Walter Liu

    Flannel for Minikube

    When using Minikube locally, a CNI (Container Network Interface) plug-in is needed. So set up Flannel from CoreOS using the open-source Tectonic Installer (@TectonicStack). It configures an IPv4 “layer 3” network fabric designed for Kubernetes.

    The response suggests several commands:

  6. Create your .kube folder:

    mkdir -p $HOME/.kube
  7. Copy in a configuration file:

    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  8. Give ownership to your user and group (e.g., “501:20”):

    sudo chown $(id -u):$(id -g) $HOME/.kube/config
  9. Make use of CNI:

    sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml

    The response:

    clusterrole "flannel" created
    clusterrolebinding "flannel" created
    serviceaccount "flannel" created
    configmap "kube-flannel.cfg" created
    daemonset "kube-flannel.ds" created
    

    configmaps in cfg files are used to define environment variables.
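    For instance, a minimal sketch of a ConfigMap (names and values hypothetical) whose keys a container imports as environment variables via envFrom:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: app-config         # hypothetical name
    data:
      LOG_LEVEL: debug
      CACHE_SIZE: "100"

    Referenced within a pod spec:

    containers:
    - name: app
      envFrom:
      - configMapRef:
          name: app-config     # imports LOG_LEVEL and CACHE_SIZE as env vars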

  10. List pods created:

    kubectl get pods --all-namespaces -o wide

    Specifying wide output adds the IP address column.

    Included are pods named:

    • api server (aka “master”) accepts kubectl commands
    • etcd (cluster store) for HA (High Availability) in the control plane
    • controller to watch for changes and maintain desired state
    • dns (domain name server)
    • proxy load balances across all pods in a service
    • scheduler watches the api server for new pods and assigns them to nodes

    System administrators control the Master node UI in the cloud, or write scripts that invoke the kubectl command-line client program, which controls the Kubernetes Master node.

    Proxy networking

    The Kube Proxy communicates only with Pod administration, whereas kubelets communicate with individual pods as well.

    Each node has a Flannel and a proxy.

    The Server obtains from Controller Manager ???

  11. Switch to the webpage of servers to Login to the next server.
  12. Be root with sudo -i and provide the password.
  13. Join the node to the master by pasting in the command captured earlier, as root:

    kubeadm join --token ... 172.31.21.55:6443 --discovery-token-ca-cert-hash sha256:...

    Note the above is one long command. So you may need to use a text editor.

    Deployments manage Pods.

    Every Pod has a unique IP. There is one IP Address per Pod. In other words, containers within a Pod share a network namespace.

    Every container has its own unique port number within its pod’s IP.

    kubernetes-ports-381x155-19677

  14. Switch to the webpage of servers to Login to the 3rd server.
  15. Again Join the node to the master by pasting in the command captured earlier:
  16. Get the list of nodes instantiated:

    kubectl get nodes
  17. To get list of events sorted by timestamp:

    kubectl get events --sort-by='.metadata.creationTimestamp'
  18. Create the initial log file so that Docker mounts a file instead of a directory:

    touch /var/log/kube-apiserver.log
    
  19. Create in each node a folder:

    mkdir /srv/kubernetes
    
  20. Get a utility to generate TLS certificates:

    brew install easyrsa
    
  21. Run it:

    ./easyrsa init-pki
    

    Master IP address

  22. Set the master’s IP address:

    MASTER_IP=172.31.38.152
    echo $MASTER_IP
    
  23. Build the CA:

    ./easyrsa --batch "--req-cn=${MASTER_IP}@$(date +%s)" build-ca nopass
    

    Watchers

    Watchers can be registered on specific nodes. Kubernetes supports TLS certificates for encryption over the line.

    REST API CRUD operations are used. For authorization, Kubernetes supports Role-Based Access Control (RBAC), Attribute-Based Access Control (ABAC), and Webhook modes. Admission controllers further intercept requests after authentication and authorization.

  24. Put in that folder (in each node):

    • basic_auth.csv user and password
    • ca.crt - the certificate authority certificate from pki folder
    • known_tokens.csv used by kubelets to talk to the apiserver
    • kubecfg.crt - client cert public key
    • kubecfg.key - client cert private key
    • server.cert - server cert public key from issued folder
    • server.key - server cert private key

  25. Copy from API server to each master node:

    
    cp kube-apiserver.yaml  /etc/kubernetes/manifests/
    

    The kubelet uses the manifests folder to create kube-apiserver instances, comparing the folder’s contents against the running state to make it so.

  26. For details about each pod:

    
    kubectl describe pods
    

    Expose

    Deploy service

  27. To deploy a service:

    kubectl expose deployment *deployment-name* [options]
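    For example, exposing a deployment named hello-world (hypothetical) on a cluster-assigned node port:

    kubectl expose deployment hello-world --type=NodePort --port=8080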

Volumes

Containers also share attached data volumes available within each Pod.

Kubelet agents

HAProxy VRRP (Virtual Router Redundancy Protocol, http://searchnetworking.techtarget.com/definition/VRRP) automatically assigns available Internet Protocol routers to participating hosts.

A Persistent Volume (PV) is a provisioned block of storage for use by the cluster.

A Persistent Volume Claim (PVC) is a request for that storage by a user; once granted, it is used as a “claim check” for that storage.

Recycling policies are Retain (keep the contents) and Recycle (scrub the contents). A sketch of a PV and a PVC follows.
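A minimal sketch of the pair (capacity, path, and names are hypothetical):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # keep contents after release
  hostPath:
    path: /data/pv0001       # hypothetical path on the node
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim             # pods reference this claim by name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi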


Activities

  1. To drain a node out of service temporarily for maintenance:

    kubectl drain node3.mylabserver.com --ignore-daemonsets

    DaemonSets

    daemonsets (ds) are usually for system services or other pods that need to physically reside on every node in the cluster, such as network services. They can also be deployed only to certain nodes, using labels and node selectors, as sketched below.
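    A minimal sketch of a DaemonSet manifest (name, image, and label are hypothetical), restricted to labeled nodes with a nodeSelector:

    apiVersion: apps/v1beta2
    kind: DaemonSet
    metadata:
      name: log-collector
    spec:
      selector:
        matchLabels:
          app: log-collector
      template:
        metadata:
          labels:
            app: log-collector
        spec:
          nodeSelector:
            disk: ssd            # run only on nodes labeled disk=ssd
          containers:
          - name: collector
            image: fluent/fluentd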

  2. To return to service:

    kubectl uncordon node3.mylabserver.com

Sample micro-service apps

The repo is based on work from others, especially Kelsey Hightower, the Google Developer Advocate.

  • https://github.com/kelseyhightower/app - an example 12-Factor application.
  • https://hub.docker.com/r/kelseyhightower/monolith - Monolith includes auth and hello services.
  • https://hub.docker.com/r/kelseyhightower/auth - Auth microservice. Generates JWT tokens for authenticated users.
  • https://hub.docker.com/r/kelseyhightower/hello - Hello microservice. Greets authenticated users.
  • https://hub.docker.com/r/nginx - Frontend to the auth and hello services.

These sample apps are manipulated by https://github.com/kelseyhightower/craft-kubernetes-workshop

  1. Install
  2. Create a Node.js server
  3. Create a Docker container image
  4. Create a container cluster
  5. Create a Kubernetes pod
  6. Scale up your services

  7. Provision a complete Kubernetes cluster using Kubernetes Engine.
  8. Deploy and manage Docker containers using kubectl.
  9. Break an application into microservices using Kubernetes’ Deployments and Services.

This “Kubernetes” folder contains scripts to implement what was described in the “Orchestrating the Cloud with Kubernetes” hands-on lab which is part of the “Kubernetes in the Google Cloud” quest.

Infrastructure as code

  1. Use an internet browser to view

    https://github.com/wilsonmar/DevSecOps/blob/master/Kubernetes/k8s-gcp-hello.sh

    The script downloads a repository forked from googlecodelabs: https://github.com/wilsonmar/orchestrate-with-kubernetes/tree/master/kubernetes

    Declarative

    This repository contains several kinds of .yaml files, which can also have the extension .yml. Kubernetes also recognizes .json files, but YAML files are easier to work with.

    The files are called “manifests” because they declare the desired state.

  2. Open an internet browser tab to view it.

    reverse proxy to front-end

    The web service consists of a front-end and a proxy served by the NGINX web server configured using two files in the nginx folder:

    • frontend.conf
    • proxy.conf

    These are explained in detail at https://www.digitalocean.com/community/tutorials/how-to-configure-nginx-as-a-web-server-and-reverse-proxy-for-apache-on-one-ubuntu-14-04-droplet

    SSL keys

    SSL keys referenced are installed from the tls folder:

    • ca-key.pem - Certificate Authority’s private key
    • ca.pem - Certificate Authority’s public key
    • cert.pem - public key
    • key.pem - private key

Kind yaml files

The kinds of yaml files:

Deployments

  • auth.yaml
  • frontend.yaml
  • hello-green.yaml
  • hello-canary.yaml
  • hello.yaml

pods

  • healthy-monolith.yaml configures “livenessProbe” (in folder healthz) and “readinessProbe” (in folder readiness) on port 81 (see the probe sketch after this list)
  • monolith.yaml
  • secure-monolith.yaml
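A sketch of the probe stanzas such a pod spec carries (paths and timings are illustrative; check the actual file):

livenessProbe:
  httpGet:
    path: /healthz
    port: 81
  initialDelaySeconds: 5
  timeoutSeconds: 1
readinessProbe:
  httpGet:
    path: /readiness
    port: 81
  initialDelaySeconds: 5
  timeoutSeconds: 1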

services samples

  • auth.yaml
  • frontend.yaml
  • hello-blue.yaml
  • hello-green.yaml
  • hello.yaml
  • monolith.yaml

Label

How Google Kubernetes Engine works

kubernetes-pods-599x298-35069

https://google-run.qwiklab.com/focuses/639?parent=catalog

PROTIP: For GKE we disable all legacy authentication, enable RBAC (Role Based Access Control), and enable IAM authentication.

Pods are defined by a manifest file read by the apiserver, which schedules them onto nodes.

Pods go into “succeeded” state after being run because pods have short lifespans – deleted and recreated as necessary.

The replication controller automatically adds or removes pods so that the specified number of pod replicas is running across nodes. This makes GKE “self-healing”, providing high availability and reliability, with “autoscaling” up and down based on demand.
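Autoscaling can be sketched with a single kubectl command (deployment name and thresholds hypothetical):

kubectl autoscale deployment hello-deploy --min=2 --max=10 --cpu-percent=80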

The following commands are from the https://kubernetes.io/docs/user-guide/kubectl-cheatsheet/:

  1. List all pods, including in the system namespace:

    
    kubectl get pods --all-namespaces
    

pod.yml manifests

An example (cadvisor):

apiVersion: v1
kind: Pod
metadata:
  name:   cadvisor
spec:
  containers:
    - name: cadvisor
      image: google/cadvisor:v0.22.0
      volumeMounts:
        - name: rootfs
          mountPath: /rootfs
          readOnly: true
        - name: var-run
          mountPath: /var/run
          readOnly: false
        - name: sys
          mountPath: /sys
          readOnly: true
        - name: docker
          mountPath: /var/lib/docker
          readOnly: true
      ports:
        - name: http
          containerPort: 8080
          protocol: TCP
      args:
        - --profiling
        - --housekeeping_interval=1s
  volumes:
    - name: rootfs
      hostPath:
        path: /
    - name: var-run
      hostPath:
        path: /var/run
    - name: sys
      hostPath:
        path: /sys
    - name: docker
      hostPath:
        path: /var/lib/docker
   

Replication rc.yml

The rc.yml (Replication Controller) manifest defines the number of replicas and the pod template to replicate:

apiVersion: v1
kind: ReplicationController
metadata:
  name: hello
spec:
  replicas: 5
  selector:
    app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello
        image: account/image:latest
        ports:
        - containerPort: 8080
  1. Apply replication:

    
    kubectl apply -f rc.yml
    

    The response expected:

    replicationcontroller "hello" configured
    
  2. List, in wide format, the number of replicated nodes:

    
    kubectl get rc -o wide
    
    DESIRED, CURRENT, READY
    
  3. Get more detail:

    
    kubectl describe rc
    

Service svc.yml

The svc.yml defines the services:

apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    app: hello-world
spec:
  type: NodePort
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: hello-world
   

PROTIP: The service’s selector should match the labels in the pod manifests.

One type of service is a load balancer within a cloud:

apiVersion: v1
kind: Service
metadata:
  name: la-lb-service
spec:
  selector:
    app: la-lb
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9376
  type: LoadBalancer
  clusterIP: 10.0.171.223
  loadBalancerIP: 78.12.23.17
   
  1. To create services:

    
    kubectl create -f svc.yml
    

    The response expected:

    service "hello-svc" created
    
  2. List:

    
    kubectl get svc
    
  3. List details:

    
    kubectl describe svc hello-svc
    
  4. List end points addresses:

    
    kubectl describe ep hello-svc
    

OpenShift routes to services

Services can be referenced by external clients using a host name such as “hello-svc.mycorp.com” by using OpenShift Enterprise, which uses “routes” that define the rules the HAProxy applies to incoming connections.

Routes are deployed by an OpenShift Enterprise administrator as routers to nodes in an OpenShift Enterprise cluster. To clarify, the default Router in Openshift is an actual HAProxy container providing reverse proxy capabilities.

Deploy yml

The deploy.yml defines the deploy:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
          protocol: TCP
      nodeSelector:
        net: gigabit
   

Deployment wraps around replica sets, a newer way of doing rolling updates than the Replication Controller. A Deployment can be rolled back to an old replica set by just changing the deploy.yml file.

PROTIP: Don’t run apt-upgrade within containers, which breaks the image-container relationship controls.

  1. Retrieve the yaml for a deployment:

    kubectl get deployment nginx-deployment -o yaml

    Notice the “RollingUpdateStrategy: 25% max unavailable, 25% max surge”.

  2. Begin rollout of a new desired version from the command line:

    kubectl set image deployment/nginx-deployment nginx=nginx:1.8

    Alternately, edit the yaml file to nginx:1.9.1 and:

    kubectl apply -f nginx-deployment.yaml
  3. View Rollout a new desired version:

    kubectl rollout status deployment/nginx-deployment
  4. Describe the yaml for a deployment:

    kubectl describe deployment nginx-deployment
  5. List the DESIRED, CURRENT, UP-TO-DATE, AVAILABLE:

    kubectl get deployments 
  6. List the history:

    kubectl rollout history deployment/nginx-deployment --revision=3
  7. Backout the revision:

    kubectl rollout undo deployment/nginx-deployment --to-revision=2

Security Context

The security.yaml file defines a security-context pod:

apiVersion: v1
kind: Pod
metadata:
  name: security-context-pod
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  volumes:
  - name: sam-vol
    emptyDir: {}
  containers:
  - name: sample-container
    image: gcr.io/google-samples/node-hello:1.0
    volumeMounts:
    - name: sam-vol
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: false
   
  1. Create the pod:

    kubectl create -f security.yaml

    This can take several minutes.

  2. Enter the security context:

    kubectl exec -it security-context-pod -- sh
  3. See the users:

    ps aux
  4. See that the group is “2000” as specified:

    cd /data && ls -al
  5. Exit the security context:

    exit
  6. Delete the security context:

    kubectl delete -f security.yaml

Kubelet Daemonset.yaml

Kubelets instantiate pods – each a set of containers sharing a single IP address – the fundamental units that nodes run.

A Kubelet agent program is installed on each server to watch the apiserver and register each node with the cluster.

PROTIP: Use a DaemonSet when running clustered Kubernetes with static pods to run a pod on every node. Static pods are managed directly by the kubelet daemon on a specific node, without the API server observing it.

  • https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/

A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected.

Deleting a DaemonSet will clean up the Pods it created. Some typical uses of a DaemonSet are:

  • running a cluster storage daemon, such as glusterd, ceph, on each node.
  • running a logs collection daemon on every node, such as fluentd or logstash.
  • running a node monitoring daemon on every node, such as Prometheus Node Exporter, collectd, Datadog agent, New Relic agent, or Ganglia gmond.
  1. Start kubelet daemon:

    
    kubelet --pod-manifest-path=<directory>
    

    This periodically scans the directory and creates/deletes static pods as yaml/json files appear/disappear there.

    Note: Kubelet ignores files starting with a dot when scanning the specified directory.

    PROTIP: By default, the kubelet exposes endpoints on port 10255.

    Containers can be Docker or rkt (pluggable)

    The /spec and /healthz endpoints report status.

The container engine pulls images and stops/starts containers.

  • https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/

CNI

The Container Network Interface (CNI) is installed using a basic cbr0 bridge via the bridge and host-local CNI plugins.

The CNI plugin is selected by passing Kubelet the command-line option:

   --network-plugin=cni 
   

See https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/

Learning resources

Nigel Poulton (@NigelPoulton, nigelpoulton.com)

Make your own K8s

Kelsey Hightower, in https://github.com/kelseyhightower/kubernetes-the-hard-way, shows the steps of how to create a cluster on Compute Engine yourself:

  • Cloud infrastructure firewall and load balancer provisioning
  • Set up a CA and TLS certificate generation
  • Set up TLS client bootstrap and RBAC authentication
  • Bootstrap an HA etcd cluster
  • Bootstrap an HA Kubernetes Control Plane
  • Bootstrap Kubernetes Workers
  • Configure the Kubernetes client for remote access
  • Manage container network routes
  • Deploy the cluster DNS add-on

O’Reilly book Kubernetes adventures on Azure, Part 1 (Linux cluster): having read several books on Kubernetes, Ivan Fioravanti, writing for Hackernoon, says it’s time to start adventuring in the magical world of Kubernetes for real! And he does so using Microsoft Azure. Enjoy the step-by-step account of his escapade (part 1).

Qwiklab

https://run.qwiklab.com/searches/lab?keywords=Build%20a%20Slack%20Bot%20with%20Node.js%20on%20Kubernetes&utm_source=endlab&utm_medium=email&utm_campaign=nextlab

The 8 labs covering 8 hours of the Kubernetes in the Google Cloud Qwiklab quest

References

by Adron Hall:

Julia Evans

  • https://jvns.ca/categories/kubernetes/

Drone.io

http://www.nkode.io/2016/10/18/valuable-container-platform-links-kubernetes.html

https://medium.com/@ApsOps/an-illustrated-guide-to-kubernetes-networking-part-1-d1ede3322727

https://cloud.google.com/solutions/heterogeneous-deployment-patterns-with-kubernetes

https://cloud.google.com/solutions/devops/

https://docs.gitlab.com/ee/install/kubernetes/gitlab_omnibus.html

https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html

https://devops.college/the-journey-from-monolith-to-docker-to-kubernetes-part-1-f5dbd730f620

Social

https://kubernetes.io/community/

Videos

Kubernetes Deconstructed Dec 15, 2017 [33:14] by Carson Anderson of DOMO (@carsonoid)

Solutions Engineering Hangout: Terraform for Instant K8s Clusters on AWS EKS by HashiCorp

Introduction to Microservices, Docker, and Kubernetes by James Quigley

http://bit.ly/2KabhKB Kubernetes in Docker for Mac April 17, 2018 by Guillaume Rose, Guillaume Tardif
