
Overview

I created this to help me both prepare for Kubernetes exams and work as an SRE.

I hope to make this complex material easier to understand quickly.

The aim here is to provide insightful commentary around carefully sequenced hands-on activities automated in a shell script – an immersive, step-by-step “deep dive” tutorial aimed at making you productive.

WARNING: I’m restructuring this so that revelations about architecture components and flows are based on yaml and what commands reveal rather than as trivia to be memorized.

NOTE: This article is now a “starter set” actively undergoing additions.

Topics

  • Infrastructure as code (IAC)
  • Manage containers
  • Naming and discovery
  • Mounting storage systems
  • Balancing loads
  • Rolling updates
  • Distributing secrets/config
  • Checking application health
  • Monitoring resources
  • Accessing and ingesting logs
  • Replicating application instances
  • Horizontal autoscaling
  • Debugging applications

Keyword Index Alphabetically

So you can go quickly/directly to terms:

Admission Control, Annotations, APIs, API Server, apply, Auto-scaling, CKAD, Clusters, cm=configmaps, Contexts, CronJobs, Declarative, Discovery, ds=DaemonSets, deployment/, ep=endpoints, Environment Variables, hashes, health checks, Imperative, Init Containers, Kubelet, kube-proxy, Ingress, Jobs, Labels, LoadBalancer, Logging, Metadata, ns=Namespaces, no=Nodes, NodePort, OpenShift, Podspecs, Pods, PVC, Readiness Probes, Liveness Probes, po=Pods, Probes, Persistent Volumes, Port Forwarding, Replication, rs=ReplicaSets, Rollbacks, Rolling Updates, Secrets, Selectors, svc=Services, sa=ServiceAccounts, Service Discovery, sts=StatefulSets, Storage Classes, Taints, Tolerations, Volumes, Workloads API

https://kubernetesbyexample.com:


Why Kubernetes?

With Kubernetes, dev teams can take complete control of production operations in cloud environments – deploy both application code and all the environment settings, at their own cadence, without ceremonies and wait time to coordinate releases.

Kubernetes is called “container orchestration” software because it automates the deployment, scaling, and management of containerized applications [Wikipedia].

  • Authentication -> Authorization -> Admission Control
  • Load balancing
  • Mixed operating systems (Ubuntu, Alpine, etc.)
  • Using images in Docker avoids the “it works on my machine” troubleshooting of setup or dependencies
  • Unlike Elastic Beanstalk, the k8s master controls what each of its nodes does

Kubernetes applies principles of the Reactive Manifesto of 2014: reactive-manifesto

Open-Source History

Kubernetes releases are listed on GitHub.com, where Kubernetes open-sourced its code:

Kubernetes was created inside Google (using the Golang programming language). Its predecessor was used inside Google for over a decade before Kubernetes was open-sourced in 2014 to the Cloud Native Computing Foundation (cncf.io) collective.

  • v1.0 (first commit within GitHub) was in July 2015, released on July 21, 2015
  • v1.6 was led by a CoreOS developer
  • v1.7 was led by a Googler
  • v1.8 is led by Jaice Singer DuMars (@jaicesd) after Microsoft joined the CNCF July 2017 VIDEO
  • v1.19 is the current version.

Kubernetes is often abbreviated as k8s (pronounced “kate”), with 8 replacing the number of characters between k and s. Thus, https://k8s.io redirects you to the home page for Kubernetes software:

The website and the Kubernetes code are maintained by the Linux Foundation, which also owns the registered trademark for the logo of a sailing ship’s wheel.

The word “kubernetes” is the ancient Greek word for people who pilot cargo ships – “helmsman” in English. Thus the nautical references and why Kubernetes experts are called “captain” and why associated products have nautical themes, such as Helm, the package manager for Kubernetes.

kubernetes-logo-125x134-15499.png This blog and podcast revealed that the predecessor to Kubernetes was called “The Borg” because its initial developers were fans of the “Star Trek: The Next Generation” TV series, in which the “Borg” society subsumes all civilizations it encounters into its “collective”. While each Google service is represented by a six-sided hexagon, the Kubernetes logo has seven sides because a beloved character in the series, a converted Borg played by Jeri Ryan, is called “7 of 9”. See Timeline of Kubernetes

landscape.cncf.io


Overview

This tutorial focuses on use of Docker containers as Kubernetes’ Container Runtime Interface (CRI). BTW, Kubernetes had worked with rkt (pronounced “rocket”) containers, which provided a CLI for containers as part of CoreOS. Rkt became the first archived project of CNCF after IBM bought Red Hat, which backs the competing CRI-O runtime.

“Containerized” microservice apps are dockerized into images pulled from DockerHub, or from private, security-vetted images in Docker Enterprise, Quay.io, or an organization’s own binary repository set up using Nexus or Artifactory.

CRI-O, Docker, and containerd all build on runc, the low-level tool which does the heavy lifting of spawning a Linux container. (See CVE-2019-5736.)

Kubernetes automates resilience by abstracting the network and storage shared by ephemeral, replaceable pods, which the Kubernetes Controller replicates to increase capacity.

PROTIP: “The median number of containers running on a single host is about 10.” – Sysdig, April 17, 2017. But there can be up to 100 pods per node (at v1.17)

Kubernetes replicates Pods (the same set of containers in each) across several worker Nodes (VM or physical machines).

Production setups have at least 3 nodes per cluster. K8s supports up to 5,000 node clusters of up to 150,000 pods (at v1.17).

Each set of pods is within a node. Kubernetes assigns each node a different external IP address.

Containers within the same pod share the same IP address, hostname, Linux namespaces, cgroups, storage Volumes, and other resources. Every container has its own unique port number within its pod’s IP.

In each pod, Istio’s Service Mesh architecture adds an “Envoy proxy” to facilitate communications and retry logic for the business-logic containers in its pod.

In the illustration below, each pod (each a different color) encapsulates one or more (Docker) containers (operating processes, each shown as a circle):

k8s-container-sets-479x364.jpg

In “Kubernetes Un-Scaried”, Phil Taprogge (of Snyk) offers this diagram: k8s-phil-diagram


Glossary - how buzzwords fit together

This diagram is shown at the end of a short (upcoming) video illustrating how the various glossary terms relate to each other:

k8s-docker


Professional certifications in Kubernetes

Instead of multiple-choice questions, the K8s exams consist of task-based practical work while SSH’d into live clusters. Each exam includes one free retake.

Languages other than English are supported.

CKAD Exam Domains

Here is the full text of the CNCF’s exam curriculum

13% Core Concepts (APIs, Create and configure basic pods, namespaces)

  • Understand Kubernetes API primitives

18% Configuration (ConfigMaps, SecurityContexts, Resource Requirements, Create & consume Secrets, ServiceAccounts)

10% Multi-Container Pods design patterns (e.g. ambassador, adapter, sidecar)

18% Observability (Liveness & Readiness Probes, Container Logging, Metrics server, Monitoring apps, Debugging)

20% Pod Design (Labels, Selectors, Annotations, Deployments, Rolling Updates, Rollbacks, Jobs, CronJobs)

13% Services & Networking (NetworkPolicies)

08% State Persistence (Volumes, PersistentVolumeClaims) for storage

k8s-ckad-logo-328x311.jpg

The 35-hour video/on-site course LFD259 $199 upgrade offered with the CKAD exam sign-up covers this series of topics:

  1. Course Introduction
  2. Kubernetes Architecture
  3. Build
  4. Design
  5. Deployment Configuration
  6. Security
  7. Exposing Applications
  8. Troubleshooting

LFD459 is the 3-day on-site equivalent course code.

The Linux Foundation exam focuses only on “pure” Kubernetes commands and excludes add-ons such as OpenShift, Helm, and Istio.

PROTIP: LF class materials are distributed in .bz2 format, which can be opened on macOS by The Unarchiver.

CKA Exam Domains

CNCF first announced the 3-hour Certified Kubernetes Administrator (CKA) exam on November 8, 2016.

19% Core Concepts
12% Installation, Configuration & Validation
12% Security
11% Networking
11% Cluster Maintenance
10% Troubleshooting
08% Application Lifecycle Management
07% Storage
05% Scheduling
05% Logging / Monitoring

https://github.com/walidshaari/Kubernetes-Certified-Administrator lists links by exam domain.

Certified Kubernauts.io Practitioner (CKP)

https://trainings.kubernauts.sh/ describes a certification offered independently by https://kubernauts.de/en/home/ (@kubernauts in Germany) which also provides free namespaces (using Rancher) at https://kubernauts.sh

CKS Exam Domains

Coming in November 2020 (before the KubeCon North America conference): the CKS exam costs $300 for 2 hours.

It’s for those who hold a CKA certification.

  • 10% Cluster Setup - best practices for configuring the environment to control access, rights, and platform conformity
  • 15% Cluster Hardening - protect the K8s API and utilize RBAC
  • 15% System Hardening - improve the security of OS & network; restrict access through IAM
  • 20% Minimize Microservice Vulnerabilities - use various mechanisms to isolate, protect, and control workloads
  • 20% Supply Chain Security - container-oriented security, trusted resources, optimized container images, CVE scanning
  • 20% Monitoring, Logging, and Runtime Security - analyze and detect threats

Docker (specifically, Docker Engine) provides operating-system-level virtualization in containers.


Exam Preparations

CAUTION: Whatever resource you use, ensure it matches the version of Kubernetes being tested (e.g., v1.19 as of 1 Sep 2020).

Sign up for exam

CNCF is part of the Linux Foundation, so…

  1. Get an account (Linux Foundation credentials) at https://identity.linuxfoundation.org. https://myprofile.linuxfoundation.org/

    NOTE: It’s a non-profit organization, thus the “.org”.

    https://docs.linuxfoundation.org/tc-docs/certification/lf-candidate-handbook

    https://docs.linuxfoundation.org/tc-docs/certification/faq-cka-ckad-cks

    https://docs.linuxfoundation.org/tc-docs/certification/tips-cka-and-ckad

  2. Login to linuxfoundation.org and join as a member for a $100 discount toward certifications.

  3. Go to https://training.linuxfoundation.org/linux-courses/system-administration-training/kubernetes-fundamentals and pay for the $300 exam plus $199 more if you want to take their class.

    Alternately, if you have a Registration code: https://trainingportal.linuxfoundation.org/redeem

  4. Find dates and times when you’re in a quiet private indoor place where no one else (co-workers) are near.

  5. Use your Linux Foundation credentials to create an account at

    examslocal.com.

  6. Install the Chrome extension used to take exams, verified during exam scheduling.

  7. Pick a date when your Biorhythms are positive on Intellectual and Physical, not hitting bottom or crossing from positive to negative:

    https://keisan.casio.com/exec/system/1340246447

  8. Sign-in at examslocal.com. For “Sponsor and exam”, type one of the following:

    • Linux Foundation : Certified Kubernetes Application Developer (CKAD) - English
    • Linux Foundation : Certified Kubernetes Administrator (CKA) - English
    • Linux Foundation : Certified Kubernetes Security (CKS) - English ?

    Click on the list, then Click “Next”.

    Click the buttons in the Checklist form and select time of exam until you get all green like this:

    k8s-checklist

    pod-overview Docs and tutorials from Kubernetes.io.

  9. Click “Or Sign In With” tab and select “Sign in for exams powered by the Linux Foundation”.
  10. Log in using your preferred account.
  11. Click “Handbook link” to download it.

    https://trainingportal.linuxfoundation.org/learn/course/certified-kubernetes-application-developer-ckad/exam/exam

  12. PROTIP: You’ll need a corded (Logitech) webcam (not one built-in).

  13. Set up your home computer to take the exam: run the Compatibility Check using the Chrome extension from “Innovative Exams”, which uses your laptop camera and microphone to watch you use a virtual Ubuntu machine.

    Sample exam questions

  14. https://github.com/dgkanatsios/CKAD-exercises provides sample exam questions.

  15. Practice enough

  16. Use kubectl commands in a Kubernetes cluster 60 minutes at a time within Red Hat’s OpenShift Playground, powered by Katacoda. Use the “oc” CLI program.

    The playground environment is pre-loaded with Source-to-Image (S2I) builders for Java (Wildfly), Javascript (Node.JS), Perl, PHP, Python and Ruby. Templates are also available for MariaDB, MongoDB, MySQL, PostgreSQL and Redis.

  17. See 3 preview exam questions (with answers explained) after signing up at https://killer.sh. Killer Shell’s CKA/CKAD Simulator provides a close replica of the exam browser terminal, with 20 CKAD and 25 CKA questions, at 29.99€ for two sessions (before 10% discount). Each session includes 36 hours of access to a cluster environment. They recommend you start the first session when you’re at the beginning of your CKA or CKAD journey.

    Build speed

  18. Practice Keyboard shortcuts for Bash

  19. Get proficient with vim using vimtutor, so that commands are intuitive (where you don’t have to pause to remember how to do things in vim).

    Bookmarks to docs

  20. Rather than typing from scratch, copy and paste from pages in Kubernetes.io.

    PROTIP: Create bookmarks in Chrome for links to ONLY kubernetes.io pages

    kubernetes-bookmarks

Day before exam

  1. Arrange to sleep well the night before the exam.
  2. If you travel, make sure you have adjusted to the correct time zone.

  3. Move files from your Downloads and Documents folder.
  4. Clear your desk of papers, books. The proctor will be checking.

Before start of exam questions

  1. Take a shower. Put on a comfortable outfit. Brush your teeth. Make your bed.
  2. Eat proteins rather than carbohydrates and sugar before the exam.
  3. Fill a clear bottle with no labels holding clear liquids (water). You’re not allowed to eat snacks.

  4. Put on music that helps you concentrate. Turn it off before starting the test.

  5. Start calm, not rushed. Be setup and be ready a half hour before the scheduled exam.

  6. You may start your exam up to 15 minutes prior to your scheduled appointment time.

  7. Have your ID out and ready to present to the video camera.

  8. The exam takes 180 minutes (3 hours), so before you start, go to the bathroom.
  9. To the proctor, show your ID and pan all the way around the room.

Start of exam

  1. Customize your terminal for productivity.

  2. 19 questions means less than 10 minutes per question. So avoid getting bogged down on the longer complex questions. First go through all the questions to answer the easiest ones first. Along the way, mark ones you want to go back to.

    NOTE: Although there are 19 objectives, not all objectives planned are in every exam.

  3. PROTIP: Avoid writing yaml from scratch.

  4. Generate a declarative yaml file from an imperative command:
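    For example, a typical pattern (the pod name here is a placeholder):

    kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml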

  5. PROTIP: Learn to search within kubernetes.io to copy code.

  6. Create yaml file as well as pod:

    kubectl create -f file.pod.yaml --record
  7. Paste to the notepad available during the exam. Save commands there to copy rather than retype.

    k -n pluto get all -o wide
    
  8. Use kubectl explain.
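    For example, to list the fields under a pod’s containers spec:

    kubectl explain pod.spec.containers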

  9. Use help, as in kubectl create configmap --help.

  10. Run a busybox web server to test access externally:

    k run tmp --restart=Never --rm --image=busybox -i -- wget -O- 10.12.2.15
    
  11. Do not delete/remove what you have done! People/robots review your servers after the test.

After exam

  1. Create an Acclaim account.
  2. If you pass the exam (score above 66%), go to acclaim to get your digital badge to post on social media.

    https://trainingportal.linuxfoundation.org/pages/exam-history


Social media communities

Latest videos about K8s

For the most up-to-date information by practitioners:

Kubernetes Concepts Explained in 9 minutes! Oct 31, 2019 by Mumshad Mannambeth

KubeCon conferences are held 3 times a year in Asia, Europe, and the US; see https://events.linuxfoundation.org.

Others:

O’Reilly’s Infrastructure & Ops Superstream Series: Session 3 Oct. 21, 2020: Kubernetes

@EllenKorbes: “Successful Kubernetes Development Workflows”

Jonathan Johnson’s live online training “Kubernetes in Three Weeks” courses through O’Reilly:

  • Part I - Meshing and Observability

  • Part II - Operators and Serverless

  • Part III - CI/CD Pipelines on Kubernetes

Programming Kubernetes (book) Kubernetes Best Practices (book) Kubernetes Up and Running, second edition (book)

Video courses

Research into learning points to “spaced repetition” as the way to move what we want to remember into long-term memory.

Different instructors explain concepts in different logical sequences.

So looking at different video classes provides that.

KodeKloud from Udemy.com

I think these are the most thorough and logically presented tutorials for CKAD and CKA.

I have several tabs open taking it:

  1. The courses are available for USD $228/year (less with code FESTIVERJ20) at KodeKloud.com, where videos are presented using the Teachable.com platform.

  2. The courses can also be purchased at Udemy.com:

  3. Either way it’s purchased, the course includes access to a Katacoda-powered lab environment for one hour at a time.

    The k alias for kubectl is already configured, so type k instead of kubectl.

  4. A “Quiz Portal” invoked from within the labs UI provides challenge questions and answers.

    Some hints reference answer files in folder “/var/answers”, viewed by a command in the Terminal, such as:

    cat /var/answers/answer-ubuntu-sleeper-2.yaml
  5. Within the quiz, some links to lab solutions on YouTube are broken, so stay in the Udemy UI for Solution videos.

    KodeKloud’s YouTube channel still provides a series for absolute beginners on Git, Ansible, Puppet, Shell, Docker, Kubernetes. https://www.youtube.com/watch?v=QJ4fODH6DXI

  6. Teacher and founder Mumshad Mannambeth (living in Singapore) also created a free work simulator for people to gain “real” work experience at https://kodekloud.com/p/kodekloud-engineer.

  7. For CKA, he also authored https://github.com/mmumshad/kubernetes-the-hard-way (on Virtualbox and Vagrant using Docker instead of containerd) which takes a manual approach to bootstrap a Kubernetes cluster from scratch, for learning to understand each task performed by the automation. The tutorial adapts the original using GCP developed by Kelsey Hightower.

  8. Join the Slack channel for CKAD and CKA students.

For CKA, https://github.com/kodekloudhub/certified-kubernetes-administrator-course

Linux Foundation LFS258

The definitive courses are from the same organization that created the exam.

https://training.linuxfoundation.org/cm/prep

https://training.linuxfoundation.org/cm/prep/?course=LFS258

Ready-for.sh

wget http://bit.ly/LFready -O ready-for.sh
chmod 755 ready-for.sh
./ready-for.sh --help
# Not for macOS
   

Nana on Logging on Udemy

Docker Tutorial for Beginners [Full Course in 3 Hours].

YouTube channel “Nana’s TechWorld” by entrepreneur Nana Janashia (from Austria) features animated illustrations.

VIDEO intro of her unique Udemy course “Logging in Kubernetes with EFK Stack | The Complete Guide”, which covers how to set up K8s clusters from scratch and configure logging with ElasticSearch, Fluentd, and Kibana.

EdX

edX.org publishes some courses from the Linux Foundation.

LFS158x: Introduction to Kubernetes

O’Reilly

A 7-hour video class (and a 3-day live course) by Sander van Vugt, who, as a Linux expert, provides in-depth CentOS install advice (including SELinux) and files available nowhere else. His diagrams are on a lightboard.

BLAH: O’Reilly’s videos are annoying because you have to move the sound up on every new chapter.

CloudAcademy

CloudAcademy’s 11-hour “Learning Path” course was updated August 27th, 2019 by Logan Rakai.

Its Playground lab enables you to skip all the install details to build this: k8s-cloudacademy-after

PROTIP: A browser-based session times out too quickly and is cumbersome to copy and paste. So use SSH instead.

Prep standalone SSH client on macOS

  1. Open an SSH client Terminal by pressing command+spacebar for the Spotlight, then type “Terminal” and select “Terminal.app”.
  2. Enter your user password if prompted.
  3. Create a folder “k8s-cloud”, then navigate into it:

    cd .. && mkdir -p k8s-cloud && cd k8s-cloud
  4. Switch to the CloudAcademy lab page. Automatically launched are four EC2 instances in the “us-west-2b” AWS Availability Zone: The “bastion” exposed to a public internet subnet and, within a private subnet, a “k8s-master” t3.micro and two “k8s-node” t3.small. In about 10 minutes, all instance status reach “running” and Alarm Status “finish loading”.
  5. Click the box to the left of “bastion-host”. When “Connect” changes from gray, click it.
  6. Click the PEM file (such as “554282681613.pem”) and save the file in that folder.
  7. Copy the PEM file name and save to your Clipboard.
  8. Switch to the Terminal.
  9. Construct a variable set command because it’s referenced several times:

    PEMF="554282681613.pem"
  10. Set permissions (so your key is not publicly viewable for SSH to work):

    chmod 400 "$PEMF"
  11. Compose the command to connect to your instance by typing and pasting its Public DNS: first type “ssh -i”, then paste the pem file name, then type “ubuntu@” for the user name inside the host, then switch to the EC2 page to copy and paste the “Public DNS (IPv4)” URL:

    ssh -i "$PEMF" ubuntu@ec2-34-210-196-19.us-west-2.compute.amazonaws.com

    The wizard should automatically detect the key you used to launch the instance. But if the response is “ubuntu@github.com: Permission denied (publickey).”, try renaming the file:

    mv ~/.ssh/config config.sav
  12. Type yes and press Enter when you see:

    The authenticity of host 'ec2-34-210-196-19.us-west-2.compute.amazonaws.com (34.210.196.19)' can't be established.
    ECDSA key fingerprint is SHA256:sg0jaN4L4RX8ZAxGDo/elIf6HFU+H/3OTG4DALwU5Ik.
    Are you sure you want to continue connecting (yes/no/[fingerprint])? 
    

    You should see a prompt such as:

    ubuntu@ip-10-0-128-5:~$

  13. Customize the Terminal environment for your productivity.

  14. Switch to the CloudAcademy.com page and scroll down to the list of commands. If you customized alias k:

    Using the alias setup above, ensure you can see master and nodes:

    k get nodes
  15. Make use of files also at https://github.com/cloudacademy/intro-to-k8s/tree/master/src

    cd src && ls
    10.1-namespace.yaml         5.1-namespace.yaml
    10.2-data_tier_config.yaml  5.2-data_tier.yaml
    10.3-data_tier.yaml         5.3-app_tier.yaml
    10.4-app_tier_secret.yaml   5.4-support_tier.yaml
    10.5-app_tier.yaml          6.1-app_tier_cpu_request.yaml
    1.1-basic_pod.yaml          6.2-autoscale.yaml
    1.2-port_pod.yaml           7.1-namespace.yaml
    1.3-labeled_pod.yaml        7.2-data_tier.yaml
    1.4-resources_pod.yaml      7.3-app_tier.yaml
    2.1-web_service.yaml        8.1-app_tier.yaml
    3.1-namespace.yaml          9.1-namespace.yaml
    3.2-multi_container.yaml    9.2-pv_data_tier.yaml
    4.1-namespace.yaml          9.3-app_tier.yaml
    4.2-data_tier.yaml          9.4-support_tier.yaml
    4.3-app_tier.yaml           metrics-server
    
  16. Create and delete pod (all named “mypod”):

    kubectl create -f 1.1-basic_pod.yaml
    kubectl get pods
    kubectl describe pod mypod | more
    kubectl delete pod mypod
    
  17. Get the “image:” name within the output:

    k describe pod xxx | grep -i image
  18. Get the Node name:

    k get pods -o wide

Pluralsight

NOTE: Pluralsight videos can be viewed as a Tivo app on my TV.

Pluralsight has a 14-hour series of videos on CKAD by Dan Wahlin (@danwahlin, codewithdan.com). In chronological order:

CAUTION: aws v2 CLI became generally available in Feb 2020 shortly after this course was published.

Nigel Poulton (@NigelPoulton, nigelpoulton.com), Docker Captain:

LinkedIn

“Kubernetes Essential Training: Application Development” by Matt Turner (from England) is hands-on using minikube 1.9.2 and kubernetes-cli 1.18.2 on a Mac:

  • Running a local cluster
  • Running containers
  • Viewing logs
  • Remotely executing commands
  • Orchestrating real-world workloads
  • Batch processing with jobs and cron jobs
  • Managing resource usage
  • Keeping containers secure
  • Advanced deployment patterns
  • Analyzing traffic
  • Extending Kubernetes
  • DRY deployment and debugging tools

Learning Kubernetes (on a Mac) by Karthik Gaekwad (when he was at Oracle) references files in https://github.com/karthequian/Kubernetes/blob/master/CourseHandout.md.

“DevOps Foundations: Transforming the Enterprise” by Mirco Hering, Global DevOps Practice Lead at Accenture

LinuxAcademy

The CKAD Troubleshooting class is highly recommended.

Udemy

“Learn Kubernetes” provides a tutorial on yaml.

Udemy.com has a CKAD course with tests updated 09/2020 with 9.5 hours of video. It includes 30-minute lightning rounds to practice the stress of taking the exam. Surviving this gives you confidence.

“Docker and Kubernetes: The Complete Guide” by Stephen Grider. Diagrams for the 21h video use draw.io, accessing https://github.com/StephenGrider/DockerCasts/tree/master/diagrams

ACloud.guru

ACloud.guru CKAD course by William Boyd has 3.5 hours of video organized according to exam domains, 13 hands-on labs, and 3 practice exams based on v1.13.

(ACloud.guru’s Vicky Tanya Seno at Santa Monica College is preparing a course on Kubernetes)

Others on CKAD:

Tips on preparing for CKAD by Muralidaran Shanmugham

Others on CKA:


Installation

Minikube Alternatives

Instead of minikube, there’s also K3s, Microk8s on Linux, Minishift.

  • KinD (Kubernetes in Docker) https://kind.sigs.k8s.io/ builds K8s clusters out of Docker containers running Docker in Docker, good for integration with a CI/CD pipeline.

NOTE: Kubernetes can run on top of alternative container runtimes such as CRI-O (backed by Red Hat) instead of Docker.

But let’s start by installing minikube on your laptop.

Minikube install

REF:

Minikube goes beyond the older Docker For Mac (DFM) and Docker for Windows (DFW): it includes a node and a master when it spins up in a local environment (such as your laptop).

CAUTION: At time of writing, https://github.com/kubernetes/minikube has 257 issues and 20 pending Pull Requests, but we’re using it anyway. MUST READ: Known Issues with Minikube (Ingress and ingress-dns addons are not supported on Linux).

Minikube on Windows

  1. Start Docker before installing/starting minikube:

    systemctl enable --now docker
  2. Verify your Docker container type:

    docker info --format '{{.OSType}}'

    On macOS, the response is “Linux”.

    On Windows, (paradoxically) make sure Docker Desktop’s container type setting is Linux and not Windows. See docker docs on switching container type.

    See https://minikube.sigs.k8s.io/docs/drivers/hyperv/

Minikube on MacOS using Docker Desktop

Docker Desktop install on macOS

NOTE: Docker drivers do not currently support ARM architecture (only AMD64).

  1. Follow Install Docker for Desktop:

  2. If the Docker Desktop icon appears (it’s already installed), right-click on it and shut it down.

    Then upgrade it:

    brew cask upgrade docker
    

    This automatically installs the HyperKit hypervisor for macOS.

    So there is no need to do what older docs say:

    brew install docker-machine-driver-xhyve
    

    Make sure Docker Desktop is running:

    Install Minikube

  3. I do not recommend using curl to obtain a specific older version of Minikube:

    curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_1.7.2-0_amd64.deb \
    && sudo dpkg -i minikube_1.7.2-0_amd64.deb
    
  4. Install Minikube on a Mac:

    brew install minikube
    
  5. A lot prints out; to see the caveats about what was installed:

    brew info minikube
    
    minikube: stable 1.15.1 (bottled), HEAD
    Run a Kubernetes cluster locally
    https://minikube.sigs.k8s.io/
    /usr/local/Cellar/minikube/1.15.1 (8 files, 62.4MB) *
      Poured from bottle on 2020-11-22 at 11:46:27
    From: https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/minikube.rb
    License: Apache-2.0
    ==> Dependencies
    Build: go ✘, go-bindata ✘
    Required: kubernetes-cli ✔
    ==> Options
    --HEAD
         Install HEAD version
    ==> Caveats
    Bash completion has been installed to:
      /usr/local/etc/bash_completion.d
     
    zsh completions have been installed to:
      /usr/local/share/zsh/site-functions
    ==> Analytics
    install: 44,822 (30 days), 110,033 (90 days), 415,969 (365 days)
    install-on-request: 37,280 (30 days), 92,684 (90 days), 342,920 (365 days)
    build-error: 0 (30 days)
    

    There is no need to do what older docs say: Make hyperkit the default driver*:

    minikube config set driver hyperkit
  6. Make sure you’re running the version just installed:

    minikube version

    The result:

    minikube version: v1.15.1
    commit: 23f40a012abb52eff365ff99a709501a61ac5876
    
  7. Installation should have created folder:

    ls $HOME/.minikube

    The result:

    addons              ca.pem              certs               key.pem             profiles
    ca.crt              cache               config              logs                proxy-client-ca.crt
    ca.key              cert.pem            files               machines            proxy-client-ca.key
    
  8. PROTIP: Assign permissions to avoid run error:

    sudo chown -R $USER $HOME/.minikube
    chmod -R u+wrx $HOME/.minikube
    

    No response is expected on success.

    Start Minikube with Docker driver

    PROTIP: If you start minikube with sudo you’ll get:

  9. PROTIP: Define this as an alias to your ~/.desktop_profile:

    alias mk8s="minikube delete;minikube start --driver=docker --memory=4096"

    --memory=1990 can be adjusted per instructions below.

    PROTIP: Before starting minikube, run minikube delete to avoid this error message:

    💢  Exiting due to GUEST_DRIVER_MISMATCH: The existing "minikube" cluster was created using the "docker" driver, which is incompatible with requested "hyperkit" driver.
    💡  Suggestion: Delete the existing 'minikube' cluster using: 'minikube delete', or start the existing 'minikube' cluster using: 'minikube start --driver=docker'
    

    PROTIP: Don’t use sudo minikube or you’ll get this error message:

    ❌  Exiting due to DRV_AS_ROOT: The "hyperkit" driver should not be used with root privileges.

    Alternately, start within Virtualbox *:

    sudo minikube start --memory=4096

    An example of an expected response:

    😄  minikube v1.15.1 on Darwin 10.15.7
    ✨  Using the docker driver based on user configuration
    👍  Starting control plane node minikube in cluster minikube
    🚜  Pulling base image ...
    💾  Downloading Kubernetes v1.19.4 preload ...
     > preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4: 486.35 MiB
    🔥  Creating docker container (CPUs=2, Memory=1990MB) ...
    🐳  Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
    🔎  Verifying Kubernetes components...
    🌟  Enabled addons: storage-provisioner
    🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
    
  10. TODO: Start Docker

    If Docker Desktop is not running, you won’t see the icon at the top of the screen and you’ll get this error:

    🤷  Exiting due to PROVIDER_DOCKER_NOT_FOUND: The 'docker' provider was not found: exec: "docker": executable file not found in $PATH
    💡  Suggestion: Install Docker
    📘  Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
    

    An example of a good start:

    🙄  "minikube" profile does not exist, trying anyways.
    💀  Removed all traces of the "minikube" cluster.
    😄  minikube v1.15.1 on Darwin 10.15.7
    ✨  Using the docker driver based on user configuration
    👍  Starting control plane node minikube in cluster minikube
    🔥  Creating docker container (CPUs=2, Memory=1987MB) ...
    🐳  Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
    🔎  Verifying Kubernetes components...
    🌟  Enabled addons: storage-provisioner, default-storageclass
    🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
    
  11. Start the minikube service, with add-ons which are each a pod:

    On Mac:

    minikube start --vm-driver=xhyve --addons=dashboard --addons=metrics-server --addons="ingress" --addons="ingress-dns"

    On Windows:

    minikube start --vm-driver=hyperv
    
    😄  minikube v1.13.1 on Darwin 10.15.7
    ✨  Using the docker driver based on existing profile
    👍  Starting control plane node minikube in cluster minikube
    🏃  Updating the running docker "minikube" container ...
    🐳  Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
    🔎  Verifying Kubernetes components...
    🌟  Enabled addons: default-storageclass, storage-provisioner
    🏄  Done! kubectl is now configured to use "minikube" by default
    
  12. If you plan on doing a lot of work, configure Docker with more memory: The default is 1990MB.

    Click the Docker icon on your Mac, then select “Preferences” then “Resources”:

    k8s-minikube-resources

    TODO: Check how much memory is already being used.

    Slide the appropriate tab to specify a larger number.

    Kubectl CLI install

    NOTE: REF: kubectl CLI (kubernetes-cli) is installed by minikube install.

  13. Install kubectl command:

    sudo apt-get update && sudo apt-get install -y apt-transport-https
    

    kubectl CLI client install

    Kubernetes administrators use kubectl (kube + ctl), the CLI tool running outside Kubernetes servers to control them. It’s automatically installed within Google Cloud instances, but on Mac clients:

  14. Install on a Mac:

    brew install kubectl
    
    🍺  /usr/local/Cellar/kubernetes-cli/1.8.3: 108 files, 50.5MB
    1.19.2
    

    It’s required by eksctl and minikube.

  15. Verify the version installed:

    kubectl version --client
    

    At time of writing:

    Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:12:48Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"darwin/amd64"}
    

    NOTICE that Golang is listed as a component (GoVersion).

    If you get this error message:

    The connection to the server localhost:8080 was refused - did you specify the right host or port?
    

Install Docker & Kubernetes on CentOS

  1. Install the Docker Desktop app

    On CentOS/RHEL 7:

    yum install docker

    On CentOS/RHEL 8, Docker is not installed by default, so download docker-ce from docker.io:

    https://docs.docker.com/install/linux/docker-ce/centos/

    The Open Container Initiative at https://opencontainers.org defined the image-spec to specify how to package containers in a “filesystem bundle” and run them in a container. This ensures compatibility among containers, no matter the originating environment.

    Start Minikube within VM

  2. To run minikube within a VM, use the None (bare-metal) driver. The none driver requires minikube to be run as root, until #3760 can be addressed. To make none the default driver:

    sudo minikube config set vm-driver none
    

    These changes will take effect upon a minikube delete and then a minikube start

    Stop Minikube

  3. Stop the service:

    minikube stop
  4. Recover space:

    minikube delete
    
    🔥  Deleting "minikube" in docker ...
    🔥  Deleting container "minikube" ...
    🔥  Removing /Users/wilson_mar/.minikube/machines/minikube ...
    💀  Removed all traces of the "minikube" cluster.
    

    Kubectl 1.8 scale is now the preferred way to control graceful delete.

    Kubectl 1.8 rollout and rollback now support stateful sets ???

  5. To continue, start minikube again.


Configuration


Service cluster IPs and ports are found through Docker --link compatible environment variables specifying ports opened by the service proxy.
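For example, a Service named “redis-master” is surfaced inside pods started after it as environment variables like these (values here are illustrative):

    REDIS_MASTER_SERVICE_HOST=10.0.0.11
    REDIS_MASTER_SERVICE_PORT=6379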

  1. REMEMBER: Unlike k describe xxx, k cluster-info is a single verb:

    kubectl cluster-info

    Example response:

    Kubernetes master is running at https://127.0.0.1:32768
    KubeDNS is running at https://127.0.0.1:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
    

Configure Contexts

  1. Show the current context:

    kubectl config current-context
    

    The expected response on minikube is “minikube”.

  2. To avoid “The connection to the server localhost:8080 was refused”

    https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting

    sudo touch $HOME/.kube/config
    sudo chown $USER $HOME/.kube/config
    chmod 600 $HOME/.kube/config
    

    Alternatively, delete the old config from ~/.kube, then restart Docker (on macOS); it rebuilds the config folder.

  3. View the Kubernetes configuration file showing configuration settings and current context:

    cat $HOME/.kube/config

    Sample response:

    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority: /Users/wilson_mar/.minikube/ca.crt
        server: https://127.0.0.1:32768
      name: minikube
    contexts:
    - context:
        cluster: minikube
        namespace: default
        user: minikube
      name: minikube
    current-context: minikube
    kind: Config
    preferences: {}
    users:
    - name: minikube
      user:
        client-certificate: /Users/wilson_mar/.minikube/profiles/minikube/client.crt
        client-key: /Users/wilson_mar/.minikube/profiles/minikube/client.key

    REMEMBER: When a namespace is not specified in yaml, the name “default” is assumed.

  4. The same content as in file ~/.kube/config is displayed by:

    kubectl config view
    

PROTIP: If your server is not up, you’ll see this error message when attempting a kubectl command:

The connection to the server 127.0.0.1:32772 was refused - did you specify the right host or port?

Customize Terminal

  1. Save a few seconds typing:

    Resource Creation Tips for the Kubernetes CKA / CKD Certification Exam by John Tucker

    Setup prompt at left

  2. Setup the prompt so it always appear at the left:

    export PS1="\n  \w\[\033[33m\]\n$ "
    

    Set up k alias

  3. Setup a shorthand alias so you can type “k” instead of kubectl:

    alias k=kubectl
    complete -F __start_kubectl k
    
  4. Setup alias:

    export do="--dry-run=client -o yaml"
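    With that variable set, generating yaml becomes (pod and image names are placeholders):

    k run nginx --image=nginx $do > pod.yaml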

    Bash Autocompletion

  5. Save a few seconds by setting up autocompletion. On bash:

    source <(kubectl completion bash)
    echo "source <(kubectl completion bash)" >> ~/.bashrc
    

    On ZSH:

    source <(kubectl completion zsh)
    echo "[[ $commands[kubectl] ]] && source <(kubectl completion zsh)" >> ~/.zshrc
     

    vim indentation

    PROTIP: vim is the only editor available, so learn to search lines in vim (Esc, /, the text to be searched).

    :set shiftwidth=2

    To indent several lines with one command: Esc, Shift+V for Visual Line mode, highlight lines, then Shift+. (>) to shift right or Shift+, (<) to shift left.

K command tips and tricks

The Kubernetes code page has this summary description:

    "Production-Grade Container Scheduling and Management"
  1. Specify the kubectl command by itself to list its sub-commands.

  2. Specify the kubectl command with --help to get info:

    k completion --help

Declarative Kubernetes Commands

K8s recognizes both imperative commands and declarative yaml files.

Declarative vs. Imperative

REF:

TASK: Create a pod with the ubuntu image to run a container to sleep for 5000 seconds. Modify the file ubuntu-sleeper-2.yaml.

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-sleeper-2
spec:
  containers:
  - name: ubuntu
    image: ubuntu
    command:
    - "sleep"
    - "5000"
   

The command can also be written inline as:

command: [ "sleep", "5000" ]
   

This corresponds to these Dockerfile directives (in pod yaml, command: overrides the image’s ENTRYPOINT and args: overrides CMD):

ENTRYPOINT ["python", "app.py"]
CMD ["--color", "red"]
   

Nana:

K8s namespaces are used to separate resources (network, files, users, processes, IPCs, etc.) into virtual clusters inside a K8s cluster.

  • Nginx-Ingress controller
  • Database (shared mysql-service or mongodb-service)
  • Logging: Elastic stack
  • Monitoring

  • Development
  • Staging
  • Blue/Green production

Namespaces provide isolation among different project teams, so they don’t overwrite each other’s definitions. Secrets and ConfigMaps are not shared across namespaces.

Different limits on resources (CPU, RAM, storage) can be defined for each namespace.

Thus, separation into different namespaces is useful for large enterprises.
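As a sketch of such per-namespace limits (the name, namespace, and values here are hypothetical), a ResourceQuota object scoped to one namespace:

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: team-quota        # hypothetical name
      namespace: staging      # hypothetical namespace
    spec:
      hard:
        requests.cpu: "4"
        requests.memory: 8Gi
        limits.cpu: "8"
        limits.memory: 16Gi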

  1. List where KubeDNS is running:

    k cluster-info

    Out of the box, without creating anything:

    k get ns

    • kube-public contains publicly accessible (without auth) ConfigMaps which contain cluster info (kubectl cluster-info)
    • kube-system holds k8s internal system processes (master, kubectl, etc.)
    • kube-node-lease holds heartbeats of nodes and the availability of nodes (lease objects)
    • default holds resources you create
    • kubernetes-dashboard is created only within minikube.

    Minikube Dashboard

  2. Open the Minikube Dashboard, which pops up in your default browser:

    minikube dashboard
    🔌  Enabling dashboard ...
    🤔  Verifying dashboard health ...
    🚀  Launching proxy ...
    🤔  Verifying proxy health ...
    🎉  Opening http://127.0.0.1:54702/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...
    

  3. Escape by pressing ctrl+C.


### Declarative yaml

  1. Declarative yaml to define a new namespace:

    apiVersion: v1           # Object controller version
    kind: Namespace          # Object classification
    metadata:                # Associated data
      name: ticketing
      labels:
        venue: opera
        watch: cpu
    spec:                    # specific object details
    

    Alternately, imperative commands to define a new namespace:

    kubectl create namespace ticketing
    kubectl label namespace ticketing venue=opera watch=cpu
    kubectl get namespaces
    kubectl get namespace apps-collection -o yaml
    
  2. REMEMBER: List api-resources (not just resources) not bound to a namespace (NOT namespaced) so they can be referenced by named namespaces, such as shared Volumes, nodes:

    k api-resources --namespaced=false
    
  3. On minikube, delete all resources from the default namespace:

    kubectl delete --all pods --namespace=default
    kubectl delete --all deployments --namespace=default
    kubectl delete --all services --namespace=default
    

Kubernetes can manage several namespaces running in each cluster.

“The primary grouping concept in Kubernetes is the namespace. Namespaces are also a way to divide cluster resources between multiple uses. That being said, there is no security between namespaces in Kubernetes; if you are a “user” in a Kubernetes cluster, you can see all the different namespaces and the resources defined in them.” – from the book: OpenShift for Developers, A Guide for Impatient Beginners by Grant Shipley and Graham Dumpleton.

OpenShift project wall namespaces

Red Hat’s OpenShift product adds Projects as “walls” between namespaces, ensuring that users or applications can only see and access what they are allowed to. OpenShift projects wrap a namespace by adding security annotations which control access to that namespace. Access is controlled through an authentication and authorization model based on users and groups.

This diagram illustrates what OpenShift adds: kubernetes-openshift-502x375-107638


Dockerfile to Pod yaml correspondance

k8s-dockerfile-sleep

Imperatively create one web server:

Klab:

  1. For Docker to create an Nginx web server:

    docker run --name my-nginx -p 80 nginx:1.19.2

    Pod yaml

  2. For Kubernetes to establish a “naked” pod using the un-deprecated run command (use deployment instead):

    kubectl run my-nginx --port 80 --image=nginx:1.19.2

    The equivalent declarative yaml:

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.19.2
        ports:
        - containerPort: 80

    NOTE: The pod definition above is defined (with an additional indentation) as a template within deployments.

  3. The opposite is “delete pod x”.

  4. List pods

    k get pods
  5. Copy a specific pod name generated to paste in the command to see its logs:

    kubectl logs pod-name
  6. Output a pod’s log (pod named “pod-x”) to a file:

    k logs pod-x | sudo tee ~/opt/answers/mypod.logs
  7. Execute an interactive terminal on a pod with bash installed (most Linux images have /bin/sh):

    kubectl exec -it pod-name -- /bin/bash

    Declarative yaml

  8. Generate a declarative yaml file from an imperative command:

    k run redis --image=redis --dry-run=client -o yaml > mypod.yaml
  9. vi mypod.yaml to edit.

    Every K8s yaml file must have these top-level properties:

    apiVersion:
    kind:
    metadata:
    spec:
    
    kind:         apiVersion:
    Pod           v1
    Service       v1
    ReplicaSet    apps/v1
    Deployment    apps/v1

    kind: abbreviations

    PROTIP: Use abbreviations (in lower case) of basic Kubernetes components to save time typing:

    k get po,no,svc,rs,deploy
    (abbreviations for: pods, nodes, services, replicasets, deployments)

    REMEMBER: kind: full value must be Title case (first character upper case), singular (not plural).

    REMEMBER: IRL, admins do not code to work with individual pods, because the whole point of K8s is to automate that chore.

    Admins define abstractions for deployment of images (Docker containers) which define templates (blueprints) for creating pods.

    metadata:

    metadata: contains a dictionary with indented name: and labels:

    spec:

    In spec: is a dictionary item containers: specifying a list/array represented by a dash in front of each item:

      spec:
        containers:
        - name: nginx-containers
          image: nginx

    REMEMBER: Under containers:, the dash in front of name is indented.

  10. Create the instance by applying the yaml file:

    k apply -f mypod.yaml
  11. Edit the pod’s definition in place:

    k edit pod mypod
  12. Extract a declaration yaml file from a running pod:

    k get pod mypod -o yaml > definition.yaml

    But this can be messy because you’ll have to delete generated lines (status:, creationTimestamp:, etc.).

    In vi normal mode, delete 5 lines (including the cursor line) with 5dd.

  13. A Busybox image contains several apps:

    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-ready
      namespace: default
    

    kubectl apply makes changes if its subject already exists (the command is declarative).

    REMEMBER: kubectl create throws an error if the resource already exists, whereas kubectl apply won’t. kubectl create says “create this thing” whereas kubectl apply says “do whatever is necessary (create, update, etc) to make it look like this”.

    The resulting file includes additional annotations.

    Beyond the test: GitOps: ArgoCD monitors GitHub and applies changes to the K8s cluster.

kubectl run

  1. Make an imperative command:

    kubectl run --image=nginx web
    
    pod/web created
    
    kubectl get pods

    NAME   READY   STATUS    RESTARTS   AGE
    web    1/1     Running   0          2m59s
   
  1. Details:

    kubectl describe pod web
    
    Name:         web
    Namespace:    default
    Priority:     0
    Node:         minikube/172.17.0.3
    Start Time:   Sun, 04 Oct 2020 07:02:16 -0600
    Labels:       run=web
    Annotations:  <none>
    Status:       Running
    IP:           172.18.0.3
    IPs:
      IP:  172.18.0.3
    Containers:
      web:
     Container ID:   docker://ecd03de690f64202c6bdf35d4b4192e5af32854d9c77093f31136570507cc600
     Image:          nginx
     Image ID:       docker-pullable://nginx@sha256:c628b67d21744fce822d22fdcc0389f6bd763daac23a6b77147d0712ea7102d0
     Port:           <none>
     Host Port:      <none>
     State:          Running
       Started:      Sun, 04 Oct 2020 07:02:49 -0600
     Ready:          True
     Restart Count:  0
     Environment:    <none>
     Mounts:
       /var/run/secrets/kubernetes.io/serviceaccount from default-token-72hc5 (ro)
    Conditions:
      Type              Status
      Initialized       True 
      Ready             True 
      ContainersReady   True 
      PodScheduled      True 
    Volumes:
      default-token-72hc5:
     Type:        Secret (a volume populated by a Secret)
     SecretName:  default-token-72hc5
     Optional:    false
    QoS Class:       BestEffort
    Node-Selectors:  <none>
    Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                  node.kubernetes.io/unreachable:NoExecute for 300s
    Events:
      Type    Reason     Age    From               Message
      ----    ------     ----   ----               -------
      Normal  Scheduled  4m40s  default-scheduler  Successfully assigned default/web to minikube
      Normal  Pulling    4m39s  kubelet, minikube  Pulling image "nginx"
      Normal  Pulled     4m7s   kubelet, minikube  Successfully pulled image "nginx" in 31.950535327s
      Normal  Created    4m7s   kubelet, minikube  Created container web
      Normal  Started    4m7s   kubelet, minikube  Started container web
    

Deploy Replicas for Replication, Rolling Updates

k8s-deployment-rs-1568x584

The ReplicaSet process replaces the older ReplicationController.

ReplicaSets enable deployment of several pods, and check their status as a single unit (replicas).

This enables load balancing across several machines for more capacity, redundancy, and rolling updates without downtime.

ReplicaSets monitor the number of pods and create pods to match the number of replicas for the label type requested in the yaml.

The sample ReplicaSet.yml file:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-app
  labels:
    app: myapp
    type: front-end
spec:
  replicas: 3
  selector:
    matchLabels:
      type: front-end
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.19.2
        ports:
        - containerPort: 80

A selector is required within ReplicaSet yaml.

PROTIP: The spec: template: is copied from a pod definition yaml, then indented.

PROTIP: Indent paste using vi

  1. PROTIP: Remember the “.apps” when listing replicasets:

    k get replicasets.apps

  2. Identify the image:

    k describe replicasets.apps replicaset-1 | grep -i image:

    Modify replicas to scale

    • Edit the file, then:
      k replace -f replicaset-def.yaml

    These alternatives don’t modify the file:

    • k scale --replicas=6 -f replicaset-def.yaml

    • k scale --replicas=6 replicaset myapp-replicaset

    • Scale based on load

Practice test with quiz about pod commands: https://kodekloud.com/courses/kubernetes-certification-course-labs/lectures/12039431

Deployments

To upgrade gradually in a production environment without downtime, do a rolling update.

Deployments make use of ReplicaSets.

kubectl run --restart=Always      # creates deployment
kubectl run --restart=Never       # creates pod
kubectl run --restart=OnFailure   # creates job
   
  1. List deployments several different ways; they all work:

    k get deployment
    k get deployments
    k get deployment.apps
    k get deployments.apps
    

Practice test with quiz about deployments: https://kodekloud.com/courses/kubernetes-certification-course-labs/lectures/12039434

Services

VIDEO: Nina

Services provide an unchanging IP address to pods in the back-end.

Internal services are only reachable within a cluster.

External services are exposed via NodePort endpoints.

PROTIP: Services are defined with a port.

REMEMBER: Port numbers in deployment yaml must match port numbers in services yaml.

spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
   
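For the REMEMBER above: the Service’s targetPort must match the containerPort in the deployment’s pod template, as in this sketch (names here are placeholders):

    # in the deployment's pod template:
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 8080   # matches the Service targetPort above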
  1. Verify visibility using curl:

    kubectl create -f 2.1-web_service.yaml
    kubectl get services
    kubectl describe service webserver  # copy IP: value 10.108.171.76
    kubectl describe nodes | grep -i address -A 1
    curl 10.0.0.100:3#### (replace #### with the actual port digits)
    

Examples of services :

  • auth.yaml
  • frontend.yaml
  • hello-blue.yaml
  • hello-green.yaml
  • hello.yaml
  • monolith.yaml

https://ravikirans.com/cks-kubernetes-security-exam-study-guide

  1. To show all components in a mongodb app:

    kubectl get all | grep mongodb 

TODO: Configure

kubectl get service

The LoadBalancer type service assigns an EXTERNAL-IP address which accepts external requests.

  1. List the URL:

    minikube service mongo-express-service

    To test, create a database.

    shared mysql-service yaml ConfigMap

  2. Define a commonly used ConfigMap referencing a service named “mysql-service” in the “database” namespace:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: mysql-configmap
    data:
      db_url: mysql-service.database
    

    REMEMBER: “.database” above references the namespace. [1:15:17]

  3. View

    k get configmap -n my-namespace

Jobs

Batch jobs are supervisor processes that run once to completion.

3 types of jobs:

  • completions=1 & parallelism=1 for non-parallel: one pod is started
  • completions=n & parallelism=m for n fixed completions run in parallel, m at a time
  • completions=1 & parallelism=m for a work queue: m pods started until one completes (rarely used)

spec: completions: 5 defines the number of pods started within a job.

parallelism: 2 defines how many of those pods run at the same time.
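A minimal Job sketch combining these fields (the name and image are placeholders):

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: somejob
    spec:
      completions: 5       # total pods that must finish successfully
      parallelism: 2       # pods allowed to run at the same time
      template:
        spec:
          containers:
          - name: worker
            image: busybox
            command: ["sh", "-c", "echo processing && sleep 5"]
          restartPolicy: Never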

  1. Check the status of jobs

    kubectl get jobs 
    NAME     COMPLETIONS   DURATION   AGE
    somejob   5/5           27s        9m41s
  2. When a job is complete, view its results:

    kubectl logs pod-name

    The API Server authenticates using one of several methods (basic, certificates, tokens, etc.).

    “Authorization” refers to determining whether the requester is allowed to perform based on role (using RBAC).

    The API Server routes several kinds of yaml declaration files: Pod, Deployment of pods, Service, Job, Configmap.

  3. Add

    https://plugins.jetbrains.com/plugin/10485-kubernetes

  4. Delete job after finish:

    ttlSecondsAfterFinished: 20
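    In context, that field sits at the top level of the Job spec, as in this sketch (other fields omitted; on v1.19 this needs the TTLAfterFinished feature gate):

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: somejob
    spec:
      ttlSecondsAfterFinished: 20   # garbage-collect the Job 20 seconds after it finishes
      template:
        ...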

Misc. List:

kubectl get -n kube-system serviceaccounts
kubectl describe -n kube-system clusterrole system:coredns

QUESTION: Find all pods that have been started with the kubectl run command:

kubectl get pods --show-labels | grep run

kubectl run test --image=nginx --dry-run=client -o jsonpath='{.metadata.labels}'

QUESTION: Create a Cron job that will run ???
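The task above is elided, but a generic CronJob skeleton looks like this (at v1.19 the apiVersion is batch/v1beta1; the name, schedule, and image here are placeholders):

    apiVersion: batch/v1beta1
    kind: CronJob
    metadata:
      name: hello-cron
    spec:
      schedule: "*/1 * * * *"      # every minute
      jobTemplate:
        spec:
          template:
            spec:
              containers:
              - name: hello
                image: busybox
                command: ["sh", "-c", "date"]
              restartPolicy: OnFailure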

Podspecs

Podspecs are yaml files that describe a pod.

apiVersion: v1
kind: Pod
metadata:
  name: busybox-ready
  namespace: default
   

Deleting Pods

k delete pod frontend --grace-period=0 --force

Plug-in manager

  1. Like apt-get, but for use within Kubernetes:

    kubectl krew install tree

    From the krew-index plugin repository on the internet.

  2. For a deployment, list its Pods within ReplicaSet:

    kubectl tree deployment ???

Add-ons to Kubernetes

Kubernetes is a platform used for building platforms such as OpenShift, Helm, EKS, Crossplane.


Cloud Kubernetes Services

Each offering has its own acronym:

  • ECS = Elastic Container Service (AWS)
  • EKS = Elastic Kubernetes Service (AWS)
  • IKS = IBM Cloud Kubernetes Service
  • ACK = Alibaba Cloud Kubernetes
  • DOKS = DigitalOcean Kubernetes Service
  • OKE = Oracle Kubernetes Engine
  • PKE = Banzai Cloud Pipeline Kubernetes Engine
  • MKE = D2iQ (Day two iQ), rebranded from Mesos DC/OS meta clusters
  • OKD = OpenShift (Red Hat) Enterprise platform as a service (PaaS) Origin community distribution
  • PKS = VMware Tanzu, from its purchase of Pivotal and Heptio (Joe Beda, Craig McLuckie)
  • RKE = Rancher Kubernetes Engine
  • Canonical

  • Rackspace’s Kubernetes as a Service

Helm charts

VIDEO: Helm (helm.sh) is the default package manager for Kubernetes (like pip and NuGet). It was started by a company called Deis in October 2015 out of a hackathon.

Helm templating creates yaml.

Helm is further automated with Tilt.

The Illustrated Children’s Guide to Kubernetes by Deis, Inc.

Helm Charts are a collection of templates that can be pulled from a version-controlled Helm repo to define, install, and upgrade complex Kubernetes applications, thus reducing copy-and-paste (and room for error in repetition).
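
For example, a sketch of the chart workflow using Helm 3 and the public Bitnami repository (the release name my-nginx is an illustrative assumption):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-nginx bitnami/nginx                 # render templates and install a release
helm upgrade my-nginx bitnami/nginx --set replicaCount=3
helm rollback my-nginx 1                            # revert the release to revision 1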

A Helm chart can be used to quickly create an OpenFaaS (Serverless) cluster:

    git clone https://github.com/openfaas/faas-netes && cd faas-netes
       kubectl apply -f ./namespaces.yml 
       kubectl apply -f ./yaml_armhf
       


OpenShift routes to services

OpenShift’s Router is an HAProxy container (taking the place of NGINX).

HAProxy uses VRRP (Virtual Router Redundancy Protocol) to automatically assign available Internet Protocol routers to participating hosts.

k8s-openshift-projects-461x277-64498.jpg

Services can be referenced by external clients using a host name such as “hello-svc.mycorp.com” by using OpenShift Enterprise, which uses routes that define the rules the HAProxy applies to incoming connections.

Routes are deployed by an OpenShift Enterprise administrator as routers to nodes in an OpenShift Enterprise cluster. To clarify, the default Router in Openshift is an actual HAProxy container providing reverse proxy capabilities.

Cluster networking

A private ClusterIP is accessible by nodes only within the same cluster.

Services listen on the same nodePort (TCP 30000 - 32767 defined by --service-node-port-range).

k8s-arch-ruo91-797x451-104467

The diagram above is referenced throughout this tutorial, particularly in the Details section below. It is by Yongbok Kim who presents animations on his website.

Communications with outside service network callers occur through a single Virtual IP address (VIP) going through a kube-proxy pod within each node. The kube-proxy load balances traffic to deployments, which are load-balanced sets of pods within each node. Kube-proxy IPVS Mode is native to the Linux kernel. CBR0 (Custom Bridge zero) forwards the eth0 traffic, rewriting the destination IP to a pod behind the Service. (3:18 into chapter 6, “Big Picture”)
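
When kube-proxy runs in IPVS mode, the kernel’s virtual server table can be inspected on a node (a sketch; requires the ipvsadm package to be installed):

sudo ipvsadm -Ln   # lists each Service VIP and the pod endpoints load-balanced behind it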

Kubernetes manages the instantiating, starting, stopping, updating, and deleting of a pre-defined number of pod replicas based on declarations in *.yaml files or interactive commands.

The number of pods replicated is based on deployment yaml files. Service yaml files specify what ports are used in deployments.

k8s-svc-deploy-asso

In 2019 Kubernetes added auto-scaling based on metrics API measurement of demand.

This Architectural Diagram pdf:  k8s-linuxacademy-arch-912x415-32433.jpg is described in the Linux Academy’s CKA course of 5:34:43 hours of videos by Chad Miller (@OpenChad).

Kubernetes Architecture Source: X-Team

PROTIP: To list clusters and switch between them, consider installing (via brew) https://github.com/ahmetb/kubectx and its companion kubens.

kube-ps1.sh adds the current Kubernetes context and namespace to your shell prompt.

Microsoft Draft

Microsoft created Draft (like Skaffold) to simplify getting started in Azure to lift-and-shift Windows ASP.NET apps. It has two commands:

    
       draft create  # helm chart and Dockerfile
       draft up      # deploy

Draft uses language packs for Ruby, C# .NET Core 2.2 with Windows packs, authenticated to Azure Container Registry (ACR) and AKS.


https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/

k create cronjob my-job --image=busybox --schedule="*/1 * * * *" -- echo hello

K8s API

https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/ (which is one big page):

  • Workloads APIs: Container, Job, CronJob, Deployment, StatefulSet, ReplicaSet, Pod, ReplicationController
  • Service APIs: Endpoints, Ingress, Service
  • Config and storage APIs: ConfigMap, CSIDriver, Secret, StorageClass, Volume
  • Metadata APIs: ControllerRevision, CRD, Event, LimitRange, HPA, …
  • Cluster APIs: APIService, Binding, CSR, ClusterRole, Node, Namespace, Lease, PersistentVolume -> HostPathVolume.

kubectl api-??? grep ???

The aggregation layer lets you install additional Kubernetes-style APIs in your cluster.

ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
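
A sketch of creating one imperatively (the app-config name and LOG_LEVEL key are illustrative assumptions):

kubectl create configmap app-config --from-literal=LOG_LEVEL=debug
kubectl get configmap app-config -o yaml   # view the stored key-value pairs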

Deployments

A Deployment is an API object that manages a replicated application, typically by running Pods with no local state.

  • auth.yaml
  • frontend.yaml
  • hello-green.yaml
  • hello-canary.yaml
  • hello.yaml
  1. Create a yaml file from a command to deploy 3 replica pods:

    kubectl create deployment nginx-lab8 --image=nginx --replicas=3 --dry-run=client -o yaml > lab8.yaml
    
  2. To delete a deployment:

    kubectl delete deployments.apps mydep ???

Pods

  • monolith.yaml
  • secure-monolith.yaml

  1. Configure “livenessProbe” (in folder health) and “readinessProbe” (in folder readiness) on port 80

    In healthy-monolith-with-probes.yaml

    readinessProbe:
      initialDelaySeconds: 5  # before applying health checks
      timeoutSeconds: 1
      httpGet:
        path: /
        port: 80
    livenessProbe:
      initialDelaySeconds: 5  # after init/startup before applying probe
      timeoutSeconds: 1
      httpGet:
        path: /
        port: 80
    
    • ExecAction executes an action inside the container
    • TCPSocketAction checks against the container’s IP address on a specified port
    • HTTPGetAction - HTTP Get request against container

DaemonSets

DaemonSets ensure that all nodes run a copy of a specified pod.

As nodes are added or removed from the cluster, a DaemonSet adds or removes the required pods.

  1. Deleting a DaemonSet removes the pods it manages.

Multi-cloud

Being open-source has enabled Kubernetes to flourish on several clouds*

Google Cloud Qwiklabs

Google Kubernetes Engine (GKE) is a container management SaaS product. GKE runs within the Google Cloud Platform (GCP) on top of Google Compute Engine, which provides the machines. GKE integration with GCP provides networking with VPC, monitoring, logging, and CI/CD.

k8s-gcp-738x314-14535

A search for “Kubernetes” within the GCP Console yields:

k8s-gcp-search-656x866-37655

Qwiklabs has several hands-on labs using Kubernetes on Google Cloud

QUEST: Secure Workloads in Google Kubernetes Engine

The 8 labs cover 8 hours of the Kubernetes in the Google Cloud Qwiklab quest.

First K8s app

Google Kubernetes Engine (GKE)

kubernetes-pods-599x298-35069

https://google-run.qwiklab.com/focuses/639?parent=catalog

PROTIP: For GKE we disable all legacy authentication, enable RBAC (Role Based Access Control), and enable IAM authentication.

Pods are defined by a manifest file read by the apiserver, which schedules them onto nodes.

Pods go into “succeeded” state after being run because pods have short lifespans – deleted and recreated as necessary.

The replication controller automatically adds or removes pods so that the declared number of pod replicas is running across nodes. This makes GKE “self-healing”, providing high availability and reliability, with “autoscaling” up and down based on demand.

PROTIP: The augmented reality mobile game Pokemon Go, released in 2016, was the largest deployment of GKE at the time.


  1. List all pods, including in the system namespace:

    
    kubectl get pods --all-namespaces
    

Amazon AWS ECS & EKS

k8s-aws-kubernauts

Amazon ECS (Elastic Container Service) is “supercharged” by the
Amazon EKS (Elastic Kubernetes Service), which provides deeper integration into AWS infrastructure (than ECS) for better reliability (at higher cost). Amazon said it runs upstream K8s, not a fork (such as AWS Elasticsearch), so it should be portable to other clouds and on-premises.

ECS is free since Amazon charges for the underlying EC2 instances and related resources for each task ECS runs.

But each EKS cluster costs an additional $144 USD per month (20 cents per hour in the lowest cost us-east-1 region), for EKS to administer a “Control Plane” across Availability Zones.

The diagram (from cloudnaut) illustrates the differences between ECS vs. EKS clusters.

eks-ecs-load-balacing-960x720-32943.png

ECS uses an Application Load Balancer (ALB) to distribute load servicing clients. When EKS was introduced December 2017, it supported only Classic Load Balancer (CLB), with beta support for Application Load Balancer (ALB) or Network Load Balancer (NLB).

Within the cluster, distribution among pods can be random or based on the round robin algorithm.

EKS incurs additional cross-AZ network traffic charges because, to ensure high availability, EKS runs within each node a proxy to distribute traffic in and out of pods across three Kubernetes masters across three Availability Zones. So this additional processing may also require larger instance types, which EKS automatically selects.

Instance type selection is an important consideration because AWS limits the number of IP Addresses per network interface based on instance size, from 2 to a max of 15. Not all AWS EC2 instance types are equipped with the Elastic Network Interface (ENI) that ECS and EKS need to virtually redistribute load among pods. Both ECS and EKS detect and automatically replace unhealthy masters, provide version upgrades, and automate patching for masters. A secondary private IPv4 network interface is used so that in the event of an instance failure, that interface and/or secondary private IPv4 address can be transferred to a hot standby instance by EKS.

eks-ecs-vpc-eni-960x720-31322

While ECS assigns a separate ENI to each ECS task (a group of containers), EKS attaches multiple ENIs per instance, with multiple private IP addresses assigned to each ENI. Since EKS shares network interfaces among pods, a different Security Group cannot be specified to restrict a specific pod.

Moreover, network interfaces, multiple private IPv4 addresses, and IPv6 addresses are only available for instances running under an isolated VPC (Virtual Private Cloud) and perhaps with AWS PrivateLink access. So EKS requires AWS VPC. For best isolation (rather than sharing), create a different VPC and Security Group for each cluster.

Both ECS and EKS are accessed from the ECS CLI/console and support ECS API commands, Docker Compose, and AWS CloudTrail logging.

Also, EKS leverages IAM authentication, but did not provide out-of-the-box support for Task IAM Roles (for pods) used to grant access to AWS resources, as ECS does (AmazonEKSClusterPolicy and AmazonEKSServicePolicy).

For example, to allow containers to access S3, DynamoDB, SQS, or SES at runtime.

Behind the scenes, Amazon used Hashicorp Packer config. scripts to make EKS-optimized AMIs run on Amazon Linux 2. The machines are preconfigured with Docker, kubelet, and the AWS/Heptio AMI Authenticator DaemonSet, plus an EC2 User Data bootstrap script that automatically joins an EKS cluster. AMIs that have GPU support are also generated for users who have defined an AWS Marketplace Subscription.

See the EKS Manifest diagram explained by Mark Richman (@mrichman) in his video class, with code at https://github.com/linuxacademy/eks-deep-dive-2019.

PROTIP: My sample.sh installs the utilities and brings up an EKS cluster with one command. It costs $110 per month.

EKS makes use of the AWS Fargate Launch Type, which provides horizontal scaling on Amazon’s own fleet of EC2 clusters. It’s informally called the “AWS Container Manager”.

Fargate supports “awsvpc” network mode natively so that tasks running on the same instance share that instance’s ENI.

“Once you do get your cluster running, there’s nothing to worry about except monitoring performance and, as demand changes, adjusting the scale of your service.” – David Clinton*

This totalcloud.io article compares ECS, EKS, and Fargate.

A concern with Fargate is its time to load.

Microsoft’s Azure Kubernetes Service (AKS)

Other clouds


Other Orchestration systems managing Docker containers

  • OpenShift
  • Kubernetes by Google
  • Centos
  • Atomic
  • Consul, Terraform
  • Serf
  • Cloudify
  • Helios

Competing Orchestration systems

  • Docker Swarm incorporated Rancher from Rancher Labs (#RancherK8s).

    Rancher Kubernetes Engine (RKE) simplifies cluster administration (on EC2, Azure, GCE, Digital Ocean, EKS, AKS, GKE, vSphere or bare metal) - (provisioning, authentication, RBAC, Policy, Security, monitoring, capacity scaling, cost control). Its catalog is based on Helm. See Creating an Amazon EC2 Cluster using Rancher.

  • Mesosphere DC/OS (Data Center Operating System) runs Apache Mesos to abstract CPU, memory, storage to provide an API to program a multi-cloud multi-tenant data center (at Twitter, Yelp, Ebay, Azure, Apple, etc.) as if it’s a single pool of resources. Kubernetes can run on top of it, but the DC/OS has premium (licensed) enterprise features. So it’s not for you if you never want to pay for anything.

    Mesos from Apache, which runs other containers in addition to Docker. K8SM is a Mesos Framework developed for Apache Mesos to use Google’s Kubernetes. Installation.

    See Container Orchestration Wars (2017) at the Velocity Conf 19 Jun 2017 by Karl Isenberg (@karlfi) of Mesosphere

  • Hashicorp Nomad is a lighter-weight orchestrator, not just for containers.

  • Red Hat (which IBM bought in 2018) offers its OpenShift to enable Docker and Kubernetes for the enterprise by adding external host names (projects) that add role-based security around namespaces. OpenStack enables running of k8s containers in other clouds or within private data centers.

    OpenShift runs under OKD (Origin Kubernetes Distribution), which includes a container runtime and an Istio mesh. NOTE: Red Hat (now part of IBM) is pushing CRI-O as its replacement for the Docker runtime.

    See https://www.redhat.com/en/technologies/cloud-computing/openshift,


Nodes

Each node has a kubelet, container tooling (Docker), kube-proxy, supervisord.

kube-proxy watches the API server for addition and removal requests. For each new service, kube-proxy opens a randomly chosen port on the local node. It then makes proxied connections to one of the corresponding back-end pods.

The “proxy” in kube-proxy means that it can do simple network stream or round-robin forwarding across a set of backends.

Three modes:

  • User space mode
  • Iptables mode
  • Ipvs mode (alpha as of v1.8)

Kubelet

A Kubelet agent program is automatically installed in each node created.

Kubelet only manages containers created by the API server - not any container running on the node.

Kubelet communicates with the API server to see if pods have been assigned to nodes.

Kubelet takes a set of Podspecs provided by the kube-apiserver to ensure that the containers described are running and healthy.

Kubelet mounts and runs pod volumes and secrets.

Image pull secrets authenticates with private container registries.

Kubelet executes health checks to identify pod/node status.

Service accounts can also store image pull secrets.

Each kubelet communicates with the control plane, which allocates IP addresses and manages the nodes under its control.

Kubelet constantly compares the status of pods against what is declared in yaml files, and starts or deletes pods as necessary to meet the request.

Restarting the Kubelet itself depends on the operating system (monit on Debian, or systemctl on systemd-based systems).

Master node

Nodes are joined to the master node using the kubeadm join program and command.

The master node runs the kube-apiserver and components etcd, controller, scheduler.

The master node itself is created by the kubeadm init command, which establishes folders and invokes the Kubernetes API server. That command is installed along with the kubectl package (pronounced “cube cuddle”). A command with the same name is used to obtain the version.

  1. View memory and CPU usage of pods across nodes from the K8s Metrics Server:

    kubectl top node
    kubectl top pod

API Server

The kubectl client communicates using REST API calls to an API Server which handles authentication and authorization.

kubectl get apiservices

The API was initially monolithic but has since been split up into groups:

  • core “” to handle pod & svc & ep (endpoint)
  • apps to handle deploy, sts, ds
  • authorization to handle role, rb
  • storage to handle pv (persistent volume) and pvc, sc (storage classes)

RBAC (Role-Based Access Control)

Scheduler

The API Server puts pods in “pending” state when it sends requests to bring them up and down; the Scheduler assigns them to nodes only when there are enough resources available.

Rules obeyed by the Scheduler about pods are called “Tolerations”.

Taints and Tolerations

REF:

KLab: “Node affinity” is a property of Pods that attracts them to a set of nodes (either as a preference or a hard requirement).

KLab: Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes.

Taint nodes with keyname=value:effect in commands targeting nodes.

Tolerate pods in PodSpec yaml with matching taints.

The Node controller uses built-in taints to specify conditions: “network-unavailable”, “unschedulable”, “cloudprovider uninitialized”, “not-ready”, “memory-pressure”, “disk-pressure”, “out-of-disk”,

  1. Use the taint nodes subcommand to specify to the Scheduler a node to repel pods matching the key:

    kubectl taint nodes node1 dedicated=group1:NoSchedule

    More than one taint can be applied to a node.

  2. Remove a taint by a dash after the taint effect:

    kubectl taint nodes node1 key=value:NoSchedule-
  3. Tolerate (ignore taints) in PodSpec yaml spec: to allow (but do not require) certain pods to schedule onto nodes with matching taints.

  tolerations:
  - key: "example-key"
    operator: "Exists"
    effect: "NoSchedule"
   

effect: “PreferNoSchedule” defines a “preference” or “soft” version of NoSchedule – the system will try to avoid placing a pod that does not tolerate the taint on the node, but it is not required.

effect: “NoExecute” causes any pods that do not tolerate the taint to be evicted immediately, and pods that do tolerate the taint will never be evicted.

tolerationSeconds: 3600 optionally added to NoExecute effect dictates how many seconds the pod stays bound to the node after the taint is added. If this pod is running and a matching taint is added to the node, then the pod will stay bound to the node for 3600 seconds, and then be evicted. If the taint is removed before that time, the pod will not be evicted.
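
A sketch of such a toleration, using one of the built-in taint keys listed above:

tolerations:
- key: "node.kubernetes.io/not-ready"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 3600   # stay bound for an hour after the taint appears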

Such details are revealed using the kubectl describe nodes command.

NOTE: Tolerations are one of a few PodSpec items which can be edited while active, along with containers[].image, initContainers[].image, and activeDeadlineSeconds.

kubectl edit pod <pod-name>

If attempt fails, the file is saved to /tmp/kubectl-edit-ccvrq.yaml

Extract pod yaml from running podspec

kubectl get pod <pod-name> -o yaml > my-new-pod.yaml

   https://kodekloud.com/courses/kubernetes-certification-course-labs/lectures/12039454




etcd storage

   The API Server and Scheduler persist their configuration and status information in an
   etcd cluster
   (from CoreOS).

   Kubernetes data stored in etcd includes jobs being scheduled, created and deployed, pod/service details and state, namespaces, and replication details.

   It's called a cluster because, for resiliency, etcd replicates data across nodes. This is why a minimum of three etcd members is recommended, to maintain a quorum.
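
   On a cluster built with kubeadm, the etcd members run as static pods in the kube-system namespace; a quick way to see them (assuming the standard component=etcd label):

   kubectl -n kube-system get pods -l component=etcd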




eksctl

1. See https://eksctl.io about installing the eksctl CLI tool for creating clusters on EKS. It is written and supported (via Slack) by GitOps vendor weave.works in Go, and uses CloudFormation.

2. To create an EKS cluster:

   eksctl create cluster

HA Proxy cluster

For network resiliency, an HA Proxy cluster distributes traffic among nodes.

Endpoints track the IP addresses of Pods with matching selectors.

EndpointSlice groups network endpoints together with Kubernetes resources.

Node Controllers and Ingress

The Node controller assigns a CIDR block to newly registered nodes, then continually monitors node health. When necessary, it taints unhealthy nodes and gracefully evicts unhealthy pods. The default timeout is 40 seconds.

Load balancing among nodes (hosts within a cloud) is handled by third-party port forwarding via Ingress controllers. See Ingress definitions.

An "Ingress" is a collection of rules that allow inbound connections to reach the cluster services. An Ingress Resource defines the connection rules. In Kubernetes the Ingress Controller could be an NGINX container providing reverse proxy capabilities.

Plug-in Network

PROTIP: Kubernetes uses third-party services to handle load balancing and port forwarding through ingress objects managed by an ingress controller.

CNI (Container Network Interface) spec

An alternative is kubenet.

Other CNI vendors include Calico, Cilium, Contiv, Weavenet. Flannel on Azure?

1. Find which cni is installed:
ps -ef | grep cni
student   3638  9589  0 23:24 pts/0    00:00:00 grep --color=auto cni
root      9735     1  3 Oct07 ?        00:54:09 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf 
--kubeconfig=/etc/kubernetes/kubelet.conf 
--config=/var/lib/kubelet/config.yaml 
--network-plugin=cni 
--pod-infra-container-image=k8s.gcr.io/pause:3.2
   
2. View cni installer files (to troubleshoot network configuration issues):

   sudo more $(sudo find / -name "*install-cni*" | grep /log/containers)
   sudo less /var/log/calico/cni/cni.log
   sudo less /etc/cni/net.d/calico-kubeconfig

cAdvisor

To collect resource usage and performance characteristics of running containers, many install a pod containing Google’s Container Advisor (cAdvisor). It aggregates and exports telemetry to an InfluxDB database for visualization using Grafana.

Google’s Heapster can also be used to send metrics to Google’s cloud monitoring console.


Containers are declared by yaml such as this to run an Alpine Linux Docker container:

apiVersion: v1
kind: Pod
metadata:
  name: alpine
  namespace: default
spec:
  containers:
  - name: alpine
    image: alpine
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
   

Other command:

command:
    - sh
    - "-c"
    - echo Hello Kubernetes! && sleep 3000
    

Nodes Architecture diagram

Yongbok Kim (who writes in Korean) posted (on Jan 24, 2016) a master map of how all the pieces relate to each other:
Click on the diagram to pop-up a full-sized diagram: k8s_details-ruo91-350x448.jpg

BTW What are now called nodes were previously called “minions”, perhaps in deference to NodeJs, which refers to nodes differently.

Klab: Nodes are managed together within each namespace.

Testing K8s

  1. Dry-run

    kubectl create -f pod.yaml --dry-run=client

End-to-end tests by those who develop Kubernetes are coded in Ginkgo and Gomega (because Kubernetes is written in Go).

The kubetest suite builds, stages, extracts, and brings up the cluster. After testing, it dumps logs and tears down the test rig.


Installation options

There are several ways to obtain a running instance of Kubernetes.

Rancher

Rancher is a deployment tool for Kubernetes that also provides networking and load balancing support. Rancher initially created its own framework (called Cattle) to coordinate Docker containers across multiple hosts, at a time when Docker was limited to running on a single host. Now Rancher’s networking provides a consistent solution across a variety of platforms, especially on bare metal or standard (non-cloud) virtual servers. In addition to Kubernetes, Rancher enables users to deploy a choice of Cattle, Docker Swarm, or Apache Mesos (the upstream project for DCOS, the Data Center Operating System). Rancher eventually became part of Docker Swarm.

Within KOPS

Minikube offline

B) Minikube spins up a local environment on your laptop.

NOTE: Ubuntu on LXD offers a 9-instance Kubernetes cluster on localhost.

PROTIP: CAUTION: your laptop going to sleep may corrupt your minikube VM.

Server install

C) Install Kubernetes natively on CentOS.

D) Pull an image from Docker Hub within a Google Compute or AWS cloud instance.

CAUTION: If you are in a large enterprise, confer with your security team before installing. They often have a repository such as Artifactory or Nexus where installers are available after being vetted and perhaps patched for security vulnerabilities.

See https://kubernetes.io/docs/setup/pick-right-solution

On GCP

  1. On GCP:

    gcloud container clusters get-credentials guestbook2

kubectl get pods --all-namespaces


OS for K8s

As a brainchild of the Linux Foundation, one would expect Kubernetes to run on different flavors of Linux.

CentOS

First, install kubeadm.

cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
   

Also:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
   

Ubuntu

  1. On Ubuntu, install:

    apt install -y docker.io
  2. To make sure Docker and Kublet are using the same systemd driver:

    cat <<EOF >/etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
  3. Install the keys:

    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
  4. sources:

    cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
    deb http://apt.kubernetes.io/ kubernetes-xenial main
    EOF
  5. To download new sources:

    apt update
  6. To download the programs:

    apt install -y kubelet kubeadm kubectl

Architectural Details

This section further explains the architecture diagram above.

This sequence of commands:

  1. Select “CloudNativeKubernetes” sandboxes.
  2. Select the first instance as the “Kube Master”.
  3. Login that server (user/123456).
  4. Change the password as prompted on the Ubuntu 16.04.3 server.

    Deploy Kubernetes master node

  5. Use this command to deploy the master node, which controls the other nodes. It is deployed first because it invokes the API Server:

    sudo kubeadm init --pod-network-cidr=10.244.0.0/16

    kubernetes-nodes-363x120-20150

    The address is the default for Flannel.

    Flow diagram

    k8s-services-flow-847x644-100409

    The diagram above is by Walter Liu

    Flannel for Minikube

    When using Minikube locally, a CNI (Container Network Interface) is needed. So set up Flannel from CoreOS using the open source Tectonic Installer (@TectonicStack). It configures an IPv4 “layer 3” network fabric designed for Kubernetes.

    The response suggests several commands:

  6. Create your .kube folder:

    mkdir -p $HOME/.kube
  7. Copy in a configuration file:

    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  8. Give your user ownership (e.g. “501:20”):

    sudo chown $(id -u):$(id -g) $HOME/.kube/config
  9. Make use of CNI:

    sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml

    The response:

    clusterrole "flannel" created
    clusterrolebinding "flannel" created
    serviceaccount "flannel" created
    configmap "kube-flannel-cfg" created
    daemonset "kube-flannel-ds" created
    

    ConfigMaps in cfg files are used to define environment variables.

  10. List pods created:

    kubectl get pods --all-namespaces -o wide

    Specifying wide output adds the IP address column

    Included are pods named:

    • api server (aka “master”) accepts kubectl commands
    • etcd (cluster store) for HA (High Availability) in the control plane
    • controller to watch for changes and maintain desired state
    • dns (domain name server)
    • proxy load balances across all pods in a service
    • scheduler watches the api server for new pods, and assigns work to nodes

    System administrators control the Master node UI in the cloud or write scripts that invoke kubectl command-line client program that controls the Kubernetes Master node.

    Kubernetes in 5 mins Desired State Management

    Proxy networking

    The Kube Proxy communicates only with Pod admin, whereas Kubelets communicate with individual pods as well.

    Each node has a Flannel and a proxy.

    The Server obtains from Controller Manager ???

  11. Switch to the webpage of servers to Login to the next server.
  12. Be root with sudo -i and provide the password.
  13. Join the node to the master by pasting in the command captured earlier, as root:

    kubeadm join --token ... 172.31.21.55:6443 --discovery-token-ca-cert-hash sha256:...

    Note the above is one long command. So you may need to use a text editor.

    Deployments manage Pods.

    kubernetes-ports-381x155-19677

  14. Switch to the webpage of servers to Login to the 3rd server.
  15. Again Join the node to the master by pasting in the command captured earlier:
  16. Get the list of nodes instantiated:

    kubectl get nodes
  17. To get list of events sorted by timestamp:

    kubectl get events --sort-by='.metadata.creationTimestamp'
  18. Create the initial log file so that Docker mounts a file instead of a directory:

    touch /var/log/kube-apiserver.log
    
  19. Create in each node a folder:

    mkdir /srv/kubernetes
    
  20. Missing: Get a utility to generate TLS certificates:

    brew install easy-rsa
    
  21. Run it:

    ./easyrsa init-pki
    

    Master IP address

  22. Run it:

    MASTER_IP=172.31.38.152
    echo $MASTER_IP
    
  23. Run it:

    ./easyrsa --batch "--req-cn=${MASTER_IP}@$(date +%s)" build-ca nopass
    

    Watchers

    To register watchers on specific nodes.??? Kubernetes supports TLS certifications for encryption over the line.

    REST API CRUD operations are used

    The K8s Admission Controller enables less coding in yaml files by adding what is necessary.

    kubectl details? 
  24. Put in that folder (in each node):

    • basic_auth.csv user and password
    • ca.crt - the certificate authority certificate from pki folder
    • known_tokens.csv kubelets use to talk to the apiserver
    • kubecfg.crt - client cert public key
    • kubecfg.key - client cert private key
    • server.cert - server cert public key from issued folder
    • server.key - server cert private key

  25. Copy from API server to each master node:

    
    cp kube-apiserver.yaml  /etc/kubernetes/manifests/
    

    The kubelet compares its contents with actual state to “make it so”, using the manifests folder to create kube-apiserver instances.

  26. For details about each pod:

    
    kubectl describe pods
    

    Expose

    Deploy service

  27. To deploy a service:

    kubectl expose deployment <deployment-name> [options]

Container Storage Interface (CSI)

Configmap

VIDEO: Nana

Use ConfigMaps as environment variables or using a volume mount in a specific namespace.

env:
  - name: SPECIAL_LEVEL_KEY
    valueFrom:
      configMapKeyRef:
        name: special-config
        key: special.how

Within a pod manifest, use the valueFrom key with a configMapKeyRef value to read the values:

volumes:
  - name: config-volume
  configMap:
    name: special-config

VIDEO: from “Nana’s TechWorld”

Volumes

Docker Containers share attached data volumes available within each Pod:

REMEMBER: Local Volumes defined in pods disappear when the pod dies.

Sample pod yaml defining the volumes mounted within its containers:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
      - mountPath: "/var/www/html"
        name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: pvc-name

For an elastic app, define several volume types in a container referencing PVC names in awsElasticBlockStore:

spec:
  containers:
  - image: elastic:latest
    name: elastic-container
    ports:
    - containerPort: 9200
    volumeMounts:
    - name: es-persistent-storage
      mountPath: /var/lib/data
    - name: es-secret-dir
      mountPath: /var/lib/secret
    - name: es-config-dir
      mountPath: /var/lib/config
  volumes:
  - name: es-persistent-storage
    persistentVolumeClaim:
      claimName: es-pv-claim
  - name: es-secret-dir
    secret:
      secretName: es-secret
  - name: es-config-dir
    configMap:
      name: es-config-map
   

Persistent Volume (PV)

PV’s are a cluster resource, not tied to a specific _____.

Admins create a Persistent Volume (PV) to provision blocks of storage (of specific Gigabit capacity sizes) for use within a specific cluster.

PV’s are like an external plugin to a cluster.

A complete list in kubernetes.io.

Locally to a single Node:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node
   

persistentVolumeReclaimPolicy (Recycling) policies are:

  • Delete
  • Retain (keep the contents)
  • Recycle (Scrub the contents).
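
The policy on an existing PV can be changed with a patch (a sketch reusing the local-pv example above):

kubectl patch pv local-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'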

For an NFS (Network File System):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-name
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.0
  nfs:
    path: /dir/path/on/nfs/server
    server: nfs-server-ip-address
   

On a Google Cloud ext4 type volume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: google-cloud-volume
  labels:
    failure-domain.beta.kubernetes.io/zone: us-central1-a__us-central1-b
spec:
  capacity:
    storage: 400Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: my-data-disk
    fsType: ext4
   

Storage Classes

Create persistent volumes dynamically:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-class-name
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1
  iopsPerGB: "10"
  fsType: ext4
   

REMEMBER: name: storage-class-name must match PVC config storageClassName: storage-class-name

Persistant Volume Claim

A Persistent Volume Claim (PVC) is a request for that storage by a user.

Once granted, a PVC is used as a “claim check” for the storage.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: storage-class-name
   

REMEMBER: The metadata: name: in the PVC definition needs to match the Pod’s claimName: pvc-name.

Kubernetes tries to find a PV that matches the capacity: 10Gi with a compatible persistent volume in the cluster.

REMEMBER: name: storage-class-name in pod definition must match PVC config storageClassName: storage-class-name
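
To verify that the claim actually bound (a sketch; pvc-name is from the definition above):

kubectl get pvc pvc-name   # STATUS shows Bound once a matching PV is found
kubectl get pv             # the CLAIM column shows which PVC holds each PV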


Deploy StatefulSet components

VIDEO

Stateless apps don’t keep a record of state (such as shopping cart items). Each request is completely new, without regard for what activity occurred before. So they can be defined using deployment components.

  • Data passes through NodeJs.
  • Pods are identical and interchangeable,
  • Standard Pods have the same service name.
  • Pods created in random order with random hashes.

Each Stateful app (such as a mysql app) that stores data (updates a database such as MongoDB) about the state of each transaction is defined using Kubernetes StatefulSet (STS) components:

  • Previous State Data (in data replicas) is queried and updated depending on the data state
  • STS Pods are NOT identical. Each pod has a sticky identity: its stable name within the {governing service domain}
  • STS Pods have individual service names, and are not interchangeable
  • STS Pods are created in sequence, after success of each Pod, based on a persistent individual identity

All pods can read, but only the Master pod can write.

To ensure each Pod maintains the latest state in local storage, continuous data sync occurs from master to slaves.
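
A minimal StatefulSet sketch (the mysql image, mysql-headless governing Service, and sizes are illustrative assumptions):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-app
spec:
  serviceName: mysql-headless   # headless Service that gives each pod its sticky DNS name
  replicas: 3
  selector:
    matchLabels:
      app: mysql-app
  template:
    metadata:
      labels:
        app: mysql-app
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD   # demo only; use a Secret in practice
          value: "example"
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:             # each pod gets its own PersistentVolumeClaim
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi

Pods are created in order as mysql-app-0, mysql-app-1, mysql-app-2, each reachable at <pod-name>.mysql-headless.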

DaemonSets

daemonsets (ds)

Usually for system services or other pods that need to physically reside on every node in the cluster, such as for network services. They can also be deployed only to certain nodes using labels and node selectors.

  1. To drain a node out of service temporarily for maintenance:

    kubectl drain node3.mylabserver.com --ignore-daemonsets
  2. To return to service:

    kubectl uncordon node3.mylabserver.com

Sample micro-service apps

Bob Reselman’s 3-day hands-on classes on Kubernetes make use of bash scripts and a sample app at https://github.com/reselbob/CoolWithKube

The repo is based on work from others, especially Kelsey Hightower, the Google Developer Advocate.

  • https://github.com/kelseyhightower/app - an example 12-Factor application.
  • https://hub.docker.com/r/kelseyhightower/monolith - Monolith includes auth and hello services.
  • https://hub.docker.com/r/kelseyhightower/auth - Auth microservice. Generates JWT tokens for authenticated users.
  • https://hub.docker.com/r/kelseyhightower/hello - Hello microservice. Greets authenticated users.
  • https://hub.docker.com/r/ngnix - Frontend to the auth and hello services.

These sample apps are manipulated by https://github.com/kelseyhightower/craft-kubernetes-workshop

  1. Install
  2. Create a Node.js server
  3. Create a Docker container image
  4. Create a container cluster
  5. Create a Kubernetes pod
  6. Scale up your services

  7. Provision a complete Kubernetes cluster using Kubernetes Engine.
  8. Deploy and manage Docker containers using kubectl.
  9. Break an application into microservices using Kubernetes’ Deployments and Services.

This “Kubernetes” folder contains scripts to implement what was described in the “Orchestrating the Cloud with Kubernetes” hands-on lab which is part of the “Kubernetes in the Google Cloud” quest.

Infrastructure as code

  1. Use an internet browser to view

    https://github.com/wilsonmar/DevSecOps/blob/master/Kubernetes/k8s-gcp-hello.sh

    The script downloads a repository forked from googlecodelabs: https://github.com/wilsonmar/orchestrate-with-kubernetes/tree/master/kubernetes

    Declarative

    This repository contains several kinds of .yaml files, which can also have the extension .yml. Kubernetes also recognizes .json files, but YAML files are easier to work with.

    The files are called “Manifests” because they declare the desired state.

  2. Open an internet browser tab to view it.

    reverse proxy to front-end

    The web service consists of a front-end and a proxy served by the NGINX web server configured using two files in the nginx folder:

    • frontend.conf
    • proxy.conf

    These are explained in detail at https://www.digitalocean.com/community/tutorials/how-to-configure-nginx-as-a-web-server-and-reverse-proxy-for-apache-on-one-ubuntu-14-04-droplet

    SSL keys

    SSL keys referenced are installed from the tls folder:

    • ca-key.pem - Certificate Authority’s private key
    • ca.pem - Certificate Authority’s public key
    • cert.pem - public key
    • key.pem - private key

pod.yml manifests

An example (cadvisor):

apiVersion: v1
kind: Pod
metadata:
  name:   cadvisor
spec:
  containers:
    - name: cadvisor
      image: google/cadvisor:v0.22.0
      volumeMounts:
        - name: rootfs
          mountPath: /rootfs
          readOnly: true
        - name: var-run
          mountPath: /var/run
          readOnly: false
        - name: sys
          mountPath: /sys
          readOnly: true
        - name: docker
          mountPath: /var/lib/docker
          readOnly: true
      ports:
        - name: http
          containerPort: 8080
          protocol: TCP
      args:
        - --profiling
        - --housekeeping_interval=1s
  volumes:
    - name: rootfs
      hostPath:
        path: /
    - name: var-run
      hostPath:
        path: /var/run
    - name: sys
      hostPath:
        path: /sys
    - name: docker
      hostPath:
        path: /var/lib/docker
   

Labels and Selectors

Labels are specified for users of Kubernetes

Sample labels and values:

  • release: stable, canary
  • environment: dev, qa, production
  • tier: frontend or backend or cache
  • team: ecommerce, auth, purchasing, marketing
  • author: name
  • maintainer: joe
  • tech-lead: name
  • application-type: ui
  • release-version: 1.0

  1. Create label automatically:

    kubectl expose

  2. Overwrite (Add) a label after a pod created:

    k label po/helloworld app=helloworldapp --overwrite

  3. List labels for a pod created:

    k get pods --show-labels

    … app=helloworldapp

    Selectors

  4. show pods labeled with values matching in list of values:

    k get pods -l 'release-version in (1.0, 2.0)'

    Label Selectors above select a set of objects using a single statement.

    "=", "!=", IN, NOTIN, EXISTS are valid selector operators (a combined example appears after this list).

  5. Delete pods

    k delete pods -l release-version=1.0
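
Set-based and equality-based selectors can be combined in one query (a sketch using the sample labels above):

k get pods -l 'environment in (qa, production),tier!=frontend'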

Replication rc.yml

The rc.yml (Replication Controller) defines the number of replicas and the pod template:

apiVersion: v1
kind: ReplicationController
metadata:
  name: hello
spec:
  replicas: 5
  selector:
    app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello
        image: account/image:latest
        ports:
        - containerPort: 8080
  1. Apply replication:

    
    kubectl apply -f rc.yml
    

    The response expected:

    replicationcontroller "hello" configured
    
  2. List, in wide format, the number of replicated nodes:

    
    kubectl get rc -o wide
    
    DESIRED, CURRENT, READY
    
  3. Get more detail:

    
    kubectl describe rc
    

Service svc.yml

The svc.yml defines the services:

apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    app: hello-world
spec:
  type: NodePort
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: hello-world

PROTIP: The selector should match the labels in the pod yaml.

One type of service is load balancer within a cloud:

apiVersion: v1
kind: Service
metadata:
  name: la-lb-service
spec:
  selector:
    app: la-lb
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9376
  type: LoadBalancer
  clusterIP: 10.0.171.223
  loadBalancerIP: 78.12.23.17
   
  1. To create services:

    
    kubectl create -f svc.yml
    

    The response expected:

    service "hello-svc" created
    
  2. List:

    
    kubectl get svc
    
  3. List details:

    
    kubectl describe svc hello-svc
    
  4. List end points addresses:

    
    kubectl describe ep hello-svc
    

Deploy yml Deployment

The deploy.yml defines the deploy:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
          protocol: TCP
      nodeSelector:
        net: gigabit

Deployment wraps around replica sets, a newer way of doing rolling-update than the Replication Controller. Roll-backs to old replica sets can be done by just changing the deploy.yml file.

PROTIP: Don’t run apt-upgrade within containers, which breaks the image-container relationship controls.

  1. Retrieve the yaml for a deployment:

    kubectl get deployment nginx-deployment -o yaml

    Notice the “RollingUpdateStrategy: 25% max unavailable, 25% max surge”.

    In the yaml, RollingUpdate is part of strategy:

    strategy:
      type: RollingUpdate
    
  2. Begin rollout of a new desired version from the command line:

    kubectl set image deployment/nginx-deployment nginx=nginx:1.8

    Alternately, edit the yaml file to nginx:1.9.1 and:

    kubectl apply -f nginx-deployment.yaml
  3. View Rollout a new desired version:

    kubectl rollout status deployment/nginx-deployment
  4. Describe the yaml for a deployment:

    kubectl describe deployment nginx-deployment
  5. List the DESIRED, CURRENT, UP-TO-DATE, AVAILABLE:

    kubectl get deployments 
  6. List the history:

    kubectl rollout history deployment/nginx-deployment --revision=3
  7. Backout the revision:

    kubectl rollout undo deployment/nginx-deployment --to-revision=2

    Record Rollback history

    --record=true # to save rollback history obtained by:

    k rollout history deployment/some-deployment
  8. Undo rollout (rollback):

    k rollout undo deployment/my-deployment --to-revision=2

Security Context

The security.yaml defines a security context pod:

apiVersion: v1
kind: Pod
metadata:
  name: security-context-pod
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  volumes:
  - name: sam-vol
    emptyDir: {}
  containers:
  - name: sample-container
    image: gcr.io/google-samples/node-hello:1.0
    volumeMounts:
    - name: sam-vol
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: false
   
  1. Create the pod:

    kubectl create -f security.yaml

    This can take several minutes.

  2. Enter the security context:

    kubectl exec -it security-context-pod -- sh
  3. See the users:

    ps aux
  4. See that the group is “2000” as specified:

    cd /data && ls -al
  5. Exit the security context:

    exit
  6. Delete the security context:

    kubectl delete -f security.yaml

Kubelet Daemonset.yaml

Kubelets instantiate pods – each a set of containers sharing a single IP address – the fundamental units that run on nodes.

A Kubelet agent program is installed on each server to watch the apiserver and register each node with the cluster.

PROTIP: Use a DaemonSet when running clustered Kubernetes with static pods to run a pod on every node. Static pods are managed directly by the kubelet daemon on a specific node, without the API server observing it.

  • https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/

A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected.

Deleting a DaemonSet will clean up the Pods it created. Some typical uses of a DaemonSet are:

  • running a cluster storage daemon, such as glusterd, ceph, on each node.
  • running a logs collection daemon on every node, such as fluentd or logstash.
  • running a node monitoring daemon on every node, such as Prometheus Node Exporter, collectd, Datadog agent, New Relic agent, or Ganglia gmond.
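
A minimal DaemonSet sketch for the log-collection case (the fluentd image is an illustrative assumption; pin a specific tag in practice):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd   # one copy runs on every node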
  1. Start kubelet daemon:

    
    kubelet --pod-manifest-path=<the-directory>
    

    This periodically scans the directory and creates/deletes static pods as yaml/json files appear/disappear there.

    Note: Kubelet ignores files starting with a dot when scanning the specified directory.

    PROTIP: By default, Kubelets exposes endpoints on port 10255.

    Containers can be Docker or rkt (pluggable)

    The /spec and /healthz endpoints report status.

The container engine pulls images and stops/starts containers.

  • https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/

CNI Plugins

The Container Network Interface (CNI) is installed as basic cbr0 using the bridge and host-local CNI plugins.

The CNI plugin is selected by passing Kubelet the command-line option:

   --network-plugin=cni 
   

See https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/

Plugin                    vxlan  L2  L3  Policy  Encrypt
Project Calico              Y    -   Y    -       Y
Calico with Canal           Y    Y   -    Y       Y
Flannel                     Y    Y   -    -       -
Weave Works (Weave Net)     Y    Y   -    Y       Y
Romana                      -    -   Y    Y       -
Kube Router                 -    -   Y    Y       -
Kopeio                      Y    Y   -    -       Y

Others:

  • Cisco ACI
  • Cilium
  • Contiv
  • Contrail
  • NSX-T
  • OpenVswitch

Make your own K8s

Kelsey Hightower, in https://github.com/kelseyhightower/kubernetes-the-hard-way, shows the steps of how to create Compute Engine yourself:

  • Cloud infrastructure firewall and load balancer provisioning
  • setup a CA and TLS cert gen.
  • setup TLS client bootstrap and RBAC authentication
  • bootstrap an HA etcd cluster
  • bootstrap an HA Kubernetes Control Plane
  • Bootstrap Kubernetes Workers
  • Configure the K8s client for remote access
  • Manage container network routes
  • Deploy the cluster DNS add-on

O’Reilly book Kubernetes adventures on Azure, Part 1 (Linux cluster) Having read several books on Kubernetes, Ivan Fioravanti, writing for Hackernoon, says it’s time to start adventuring in the magical world of Kubernetes for real! And he does so using Microsoft Azure. Enjoy the step-by-step account of his escapade (part 1).

Kubeflow

https://github.com/kubeflow/kubeflow makes deployment of Machine Learning (TensorFlow) workflows on Kubernetes, using Kafka

AWS K8s Cluster Auto-scaler

https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md provides deep-dive notes and code.

References


Julia Evans

  • https://jvns.ca/categories/kubernetes/

Drone.io

http://www.nkode.io/2016/10/18/valuable-container-platform-links-kubernetes.html

https://medium.com/@ApsOps/an-illustrated-guide-to-kubernetes-networking-part-1-d1ede3322727

https://cloud.google.com/solutions/heterogeneous-deployment-patterns-with-kubernetes

https://cloud.google.com/solutions/devops/

https://docs.gitlab.com/ee/install/kubernetes/gitlab_omnibus.html

https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html

https://devops.college/the-journey-from-monolith-to-docker-to-kubernetes-part-1-f5dbd730f620

https://github.com/ramitsurana/awesome-kubernetes

Jobs for you

Kubernetes Dominates in IT Job Searches

Learning, Video and Live

Kubernetes for Beginners by Siraj Jan 8, 2019 [11:04]

Kubernetes Deconstructed Dec 15, 2017 [33:14] by Carson Anderson of DOMO (@carsonoid)

Solutions Engineering Hangout: Terraform for Instant K8s Clusters on AWS EKS by HashiCorp

Introduction to Microservices, Docker, and Kubernetes by James Quigley

Kubernetes in Docker for Mac April 17, 2018 by Guillaume Rose, Guillaume Tardif

YOUTUBE: What is Kubernetes? Jun 18, 2018 by Jason Rahm

Kubernetes for Machine Learning

This article talks about Jupyter notebooks correctness and functionality being dependent on their environment, called “training serving skew”. To get around that, use the Binder service which takes Jupyter notebooks within a Git repository to build a container image, then launches the image in a Kubernetes cluster with an exposed route accessible from the public internet.

OpenShift’s Source-to-image (S2I) and Graham Dumpleton’s OpenShift S2I builder builds artifacts from source and injects them into docker images.

It’s used by Seldon-Core to scale Machine Learning environments. There are Seldon-Core Examples

Seldon-Core is used by Kubeflow, which makes deployments of machine learning (ML) workflows on Kubernetes simple, portable and scalable. It provides templates and custom resources to deploy TensorFlow and other machine learning libraries and tools on Kubernetes. Included in Kubeflow is JupyterHub to create and manage multi-user interactive Jupyter notebooks. It began as TensorFlow Extended at Google.

https://github.com/kubernetes-incubator is a collection of repositories such as the spartakus Anonymous Usage Collector, metrics-server, external-dns which configures external DNS servers (AWS Route53, Google CloudDNS and others) for Kubernetes Ingresses and Services, and kube-aws which is a command-line tool to declaratively manage Kubernetes clusters on AWS.

https://radanalytics.io Oshinko empowers intelligent app development on the OpenShift platform, deploying and managing Apache Spark clusters. It has a spark cluster management app (oshinko-webui).

Resources

8 Lightboard VIDEOS: Understanding Kubernetes series by VMware.

https://github.com/hjacobs/kubernetes-failure-stories

Kubstack

Daniel Pacak’s experience with CKAD (from Aqua Security)

@pst418

GCP PODCAST: Kubernetes and Google Container Engine hosts Francesc Campoy Flores and Mark Mandel interview Brian Dorsey, Developer Advocate, Google Cloud Platform. Comments at r/gcppodcast

Microsoft’s “PDF: 50 days from zero to hero with Kubernetes” includes:

  1. Phippy Goes to the Zoo is a children’s book in which the character Phippy (from Docker) introduces pods, replica sets, deployments, ingress.

  2. The 6-part YouTube videos by Brendan Burns drawing behind glass.

  3. Kubernetes core concepts for Azure Kubernetes Service (AKS) explore basic concepts like YAML definitions, networking, secrets, and application deployments from source code.

  4. Katacoda provides a Bash terminal as if you are running Minikube and kubectl locally just by clicking the code on the left pane rather than typing.

  5. Microservices architecture on Azure Kubernetes Service (AKS) describes a reference implementation at https://github.com/mspnp/microservices-reference-implementation

  6. https://aksworkshop.io/ is a hands-on workshop to create a Kubernetes cluster, deploy a microservices-based application, and set up a CI/CD pipeline.

    • Kubernetes deployments, services and ingress
    • Deploying MongoDB using Helm
    • Azure Monitor for Containers, Horizontal Pod Autoscaler and the Cluster Autoscaler
    • Building CI/CD pipelines using Azure DevOps and Azure Container Registry
    • Scaling using Virtual Nodes, setting up SSL/TLS for your deployments, using Azure Key Vault for secrets

  7. https://azure.microsoft.com/en-us/topic/what-is-kubernetes

  8. https://aka.ms/k8slearning

  9. A visual guide on troubleshooting Kubernetes deployments DECEMBER 2019

  10. https://coreos.com/blog/kubectl-tips-and-tricks

    VIDEO from Jun 22, 2017 Covers bash completion

    ambassador pattern to proxy access to a database (perhaps sharded)*

    Adapter pattern presents a standardized interface across multiple pods, to normalize output logs and monitoring data. Adapts third-party software.

    sidecar pattern

Pod          Affinity       Anti-Affinity
To Pods      podAffinity    topologySpreadConstraints
To Nodes     nodeAffinity   Taints and Tolerations

A cgroup (control group) is a group of Linux processes with optional resource isolation, accounting, and limits.
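
For example (a sketch assuming cgroup v1, where each controller has its own hierarchy), a container’s memory limit can be read from inside it:

cat /sys/fs/cgroup/memory/memory.limit_in_bytes   # the memory cap imposed via the container's cgroup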

Secrets - custom controllers

Pods consume static ConfigMaps and Secrets.

PROTIP: To monitor ConfigMaps and Secrets for changes and apply updates to a hash in the PodSpec (which then triggers changes), install the custom controller “Wave” at https://github.com/pusher/wave.

What Kubernetes calls its secrets are actually Base64 encoded text.
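
To see this for yourself (my-secret and its password key are illustrative placeholders):

kubectl get secret my-secret -o jsonpath='{.data.password}' | base64 --decode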

PROTIP: Custom controllers can turn external secret stores into Kubernetes Secrets (“sealed secrets”):

  • Bitnami’s Secret Controller has a key in the Controller used to do asymmetric encrypt and decrypt of external secrets stored in Git.
  • AWS Secrets Manager (ASM)

ServiceAccounts

Debugging

K8s does not come with debuggers. Output to logs, then use tracing. Printlines.

DatadogHQ.com for metrics & traces

unu uses Jaeger for auto-instrumentation

Mindspace.net provides IDE connecting to node remote debugging.

cluster-api.sigs.k8s.io printlines

KubeMonkey is a Chaos Monkey forcing random failures within Kubernetes – to test the fault tolerance of our deployments.

Multi-Container Pods

The kube-scheduler assigns pods to nodes at runtime. Before scheduling, it checks resources, QoS, policies, user specs.


K8s on Raspberry Pi

Read how the legendary Scott Hanselman built Kubernetes on 6 Raspberry Pi nodes, each with a 32GB SD card and a 1GB RAM ARM chip (like on smartphones).

Hanselman talked with Alex Ellis (@alexellisuk), who keeps his shell-file instructions updated for running on the Pis to install OpenFaaS.

CNCF Ambassador Chris Short developed the rak8s (pronounced rackets) library to make use of Ansible.

Others:

  • https://blog.hypriot.com/getting-started-with-docker-on-your-arm-device/
  • https://blog.sicara.com/build-own-cloud-kubernetes-raspberry-pi-9e5a98741b49

Blogs

https://lnkd.in/f3BciG5

Sandeep Dinesh (@sandeepdinesh) from 2018

  • https://medium.com/google-cloud/kubernetes-best-practices-season-one-11119aee1d10
  • https://www.youtube.com/playlist?list=PLIivdWyY5sqL3xfXz5xJvwzFW_tlQB_GB

Observability

IBM’s Kubernetes 101

More on DevOps

This is one of a series on DevOps:

  1. DevOps_2.0
  2. ci-cd (Continuous Integration and Continuous Delivery)
  3. User Stories for DevOps

  4. Git and GitHub vs File Archival
  5. Git Commands and Statuses
  6. Git Commit, Tag, Push
  7. Git Utilities
  8. Data Security GitHub
  9. GitHub API
  10. TFS vs. GitHub

  11. Choices for DevOps Technologies
  12. Java DevOps Workflow
  13. AWS DevOps (CodeCommit, CodePipeline, CodeDeploy)
  14. AWS server deployment options

  15. Cloud services comparisons (across vendors)
  16. Cloud regions (across vendors)
  17. AWS Virtual Private Cloud

  18. Azure Cloud Onramp
  19. Azure Cloud
  20. Azure Cloud Powershell
  21. Bash Windows using Microsoft’s WSL (Windows Subsystem for Linux)

  22. Digital Ocean
  23. Cloud Foundry

  24. Packer automation to build Vagrant images
  25. Terraform multi-cloud provisioning automation
  26. Hashicorp Vault and Consul to generate and hold secrets

  27. Powershell Ecosystem
  28. Powershell on MacOS
  29. Powershell Desired System Configuration

  30. Jenkins Server Setup
  31. Jenkins Plug-ins
  32. Jenkins Freestyle jobs
  33. Jenkins2 Pipeline jobs using Groovy code in Jenkinsfile

  34. Docker (Glossary, Ecosystem, Certification)
  35. Make Makefile for Docker
  36. Docker Setup and run Bash shell script
  37. Bash coding
  38. Docker Setup
  39. Dockerize apps
  40. Docker Registry

  41. Maven on MacOSX

  42. Ansible

  43. MySQL Setup

  44. SonarQube static code scan

  45. API Management Microsoft
  46. API Management Amazon

  47. Scenarios for load