
Kubernetes for orchestration of containers, especially in clouds, including OpenShift


Overview

I created this to help me both prepare for Kubernetes exams and work as an SRE.

Its contribution is a logical presentation that makes this complex material easier to understand quickly, yet deeply.

The aim here is to provide insightful commentary around carefully sequenced hands-on activities automated in a shell script – an immersive step-by-step “deep dive” tutorial aimed to make you productive.

WARNING: I’m restructuring this so that revelations about architecture components and flows are based on yaml and what commands reveal rather than as trivia to be memorized. Emphasis on practical commands means less emphasis on architecture trivia.

NOTE: This article is now a “starter set” actively undergoing additions.

Keyword Index Alphabetically

So you can go quickly/directly to terms:

Admission Control, Annotations, APIs, API Server, apply, Auto-scaling, CKAD, Clusters, ClusterRoles, cm=configmaps, Contexts, Controllers, CRD, CronJobs, Declarative, Discovery, ds=DaemonSets, deployment/, ep=endpoints, Environment Variables, Expose, hashes, health checks, Imperative, Init Containers, Ingress, JSONPath, Kubelet, kube-proxy, Labels, LoadBalancer, Logging, Metadata, ns=Namespaces, netpol=NetworkPolicies, no=Nodes, NodePort, OpenShift, po=Pods, Podspecs, Readiness Probes, Liveness Probes, Probes, Persistent Volumes, Port Forwarding, PVC, Replication, rs=ReplicaSets, Rollbacks, Rolling Updates, Secrets, Selectors, svc=Services, sa=ServiceAccounts, Service Discovery, sts=StatefulSets, Storage Classes, Taints, Tolerations, Vim, Volumes, Workloads API


Why Kubernetes?

VIDEO: How Kubernetes Works explained by Brendan Burns (K8s co-founder)

Technologies aside, with Kubernetes, dev teams can take complete control of production operations in cloud environments – deploy both application code and all the environment settings, at their own cadence, without ceremonies and wait time to coordinate releases. Freedom is why it contributes to corporate agility and faster time to market.

Kubernetes applies principles of the Reactive Manifesto of 2014.

Kubernetes is called “container orchestration” software because it automates the deployment, scaling, and management of containerized applications[Wikipedia].

  • Authentication -> Authorization -> Admission Control
  • Load balancing
  • Mixed operating systems (Ubuntu, Alpine, etc.)
  • Using images in Docker avoids the “it works on my machine” troubleshooting of setup or dependencies
  • Unlike Elastic Beanstalk, the k8s master controls what each of its nodes do

What Kubernetes contributes

  • Infrastructure as code (IAC)
  • Manage containers
  • Naming and discovery
  • Mounting storage systems
  • Balancing loads
  • Rolling updates
  • Distributing secrets/config
  • Checking application health
  • Monitoring resources
  • Accessing and ingesting logs
  • Replicating application instances
  • Horizontal autoscaling
  • Debugging applications

Open-Source History

Releases of Kubernetes are listed on GitHub.com, where Kubernetes open-sourced its source code:

Kubernetes was created inside Google (using the Golang programming language). Its predecessor (Borg) was used inside Google for over a decade before Kubernetes was open-sourced in 2014 to the Cloud Native Computing Foundation (cncf.io) collective.

  • v1.0 was released on July 21, 2015
  • v1.6 was led by a CoreOS developer
  • v1.7 was led by a Googler
  • v1.8 was led by Jaice Singer DuMars (@jaicesd) after Microsoft joined the CNCF in July 2017 VIDEO
  • v1.19 is the current version (at time of writing).

Kubernetes is often abbreviated as k8s (pronounced “kate”), with 8 replacing the number of characters between k and s. Thus, https://k8s.io redirects you to the home page for Kubernetes software:

The website and the Kubernetes code are maintained by the Linux Foundation, which also owns the registered trademark for the logo of a sailing ship’s wheel.

The word “kubernetes” is the ancient Greek word for people who pilot cargo ships – “helmsman” in English. That is why Kubernetes experts are called “captains”, and why associated products have nautical themes, such as Helm, the package manager for Kubernetes.

kubernetes-logo-125x134-15499.png This blog and podcast revealed that the predecessor to Kubernetes was called “The Borg” because its initial developers were fans of the “Star Trek: The Next Generation” TV series. In that series, the “Borg” society subsumes all civilizations it encounters into its “collective”. While each Google service is represented by a six-sided hexagon, the Kubernetes logo has seven sides. That is because a beloved character in the series, a converted Borg played by Jeri Ryan, is named “7 of 9”. See Timeline of Kubernetes

landscape.cncf.io


Architectural Components Overview

“Containerized” microservice apps are dockerized into images pulled from DockerHub or private security-vetted images in Docker Enterprise, Quay.io, or an organization’s own binary repository setup using Nexus or Artifactory.

Kubernetes automates resilience by abstracting the network and storage shared by ephemeral, replaceable pods, which the Kubernetes Controller replicates to increase capacity.

This tutorial focuses on use of Docker containers as Kubernetes’ Container Runtime Interface (CRI), which ensures that every image can be run on every runtime.

VIDEO: Kubernetes only needs the Container Runtime from Docker’s Engine, so Kubernetes created a “dockershim” to use Docker’s Container Runtime. Docker later extracted that runtime as “containerd” and gave it to the CNCF.

Kubernetes had worked with rkt (pronounced “rocket”) containers, which provided a CLI for containers as part of CoreOS. Rkt became the first archived project of CNCF after IBM bought Red Hat, whose competing CRI-O technology is used with OpenShift.

Runc is supported by CRI-O, Docker, ContainerD. Runc is the low-level tool which does the “heavy lifting” of spawning a Linux container. (See CVE-2019-5736).

PROTIP: “The median number of containers running on a single host is about 10.” – Sysdig, April 17, 2017. But there can be up to 100 pods per node (at v1.17)

Kubernetes replicates Pods (the same set of containers in each) across several worker Nodes (VM or physical machines).

Production setups have at least 3 nodes per cluster. K8s supports up to 5,000 node clusters of up to 150,000 pods (at v1.17).

Each set of pods is within a node. Kubernetes assigns each node a different external IP address.

k8s-pod-sharing-324x247 *

Containers within the same pod share the same IP address, hostname, Linux namespaces, cgroups, storage Volumes, and other resources.

Every container has its own unique port number within the pod’s shared IP.
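As a hedged illustration (the pod, container, and port choices here are made up for this example), a two-container pod must give each container its own port because both share the pod’s single IP:

    apiVersion: v1
    kind: Pod
    metadata:
      name: two-containers              # hypothetical name
    spec:
      containers:
      - name: web
        image: nginx:alpine             # listens on port 80
        ports:
        - containerPort: 80
      - name: sidecar
        image: busybox
        command: ["httpd", "-f", "-p", "8080"]   # busybox httpd on 8080, since port 80
        ports:                                   # is already taken within the shared IP
        - containerPort: 8080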

In each pod, the Service Mesh Istio architecture places an “Envoy proxy” to facilitate the communications and retry logic for the business-logic containers in its pod.

In the illustration below, each pod (each a different color) encapsulates one or more (Docker) containers (operating processes, each shown as a circle):

k8s-container-sets-479x364.jpg


Glossary - how buzzwords fit together

This diagram is shown at the end of a short (upcoming) video that logically illustrates how the various glossary terms relate to each other:

k8s-docker

Clients interact with the master node (K8s Control Plane) via the kube-apiserver.

etcd is the database within each cluster.
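To see these control-plane components for yourself on a minikube or kubeadm-built cluster (the exact pod names vary by installer), list the pods in the kube-system namespace:

    kubectl get pods --namespace=kube-system
    # typically lists kube-apiserver-*, etcd-*, kube-scheduler-*,
    # kube-controller-manager-*, kube-proxy-*, and coredns-* pods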

In “Kubernetes Un-Scaried”, Phil Taprogge (of Snyk) offers this diagram:
k8s-phil-diagram


Professional certifications in Kubernetes

Instead of multiple-choice questions, K8s exams consist of practical tasks performed while SSH’d into live clusters. Each exam includes one free retake if you fail.

NOTE: There is support for languages other than English.

VIDEO: “Hands-on Tips to Pass the CKAD Exam” from CloudAcademy.

CKAD Exam Domains

Here is the full text of the CNCF’s exam curriculum

13% Core Concepts (APIs, Create and configure basic pods, namespaces)

  • Understand Kubernetes API primitives

18% Configuration (ConfigMaps, SecurityContexts, Resource Requirements, Create & consume Secrets, ServiceAccounts)

10% Multi-Container Pods design patterns (e.g., ambassador, adapter, sidecar)

18% Observability (Liveness & Readiness Probes, Container Logging, Metrics server, Monitoring apps, Debugging)

20% Pod Design (Labels, Selectors, Annotations, Deployments, Rolling Updates, Rollbacks, Jobs, CronJobs)

13% Services & Networking (NetworkPolicies)

08% State Persistence (Volumes, PersistentVolumeClaims) for storage

k8s-ckad-logo-328x311.jpg

CKA Exam Domains

CNCF first announced the 3-hour Certified Kubernetes Administrator (CKA) exam on November 8, 2016.

19% Core Concepts
12% Installation, Configuration & Validation
12% Security
11% Networking
11% Cluster Maintenance
10% Troubleshooting
08% Application Lifecycle Management
07% Storage
05% Scheduling
05% Logging / Monitoring

https://github.com/walidshaari/Kubernetes-Certified-Administrator lists links by exam domain.

Certified Kubernauts.io Practitioner (CKP)

https://trainings.kubernauts.sh/ describes a certification offered independently by https://kubernauts.de/en/home/ (@kubernauts in Germany) which also provides free namespaces (using Rancher) at https://kubernauts.sh

CKS Exam Domains

Coming in November 2020 (before the KubeCon North America conference): the CKS exam is $300 for 2 hours.

It’s for those who hold a CKA certification.

  • 10% Cluster Setup - Best practice for configuration to control environment access, rights, and platform conformity.
  • 15% Cluster Hardening - to protect K8s API and utilize RBAC
  • 15% System Hardening - to improve the security of OS & Network; restrict access through IAM
  • 20% Minimize Microservice Vulnerabilities - to use various mechanisms to isolate, protect, and control workload.
  • 20% Supply Chain Security - for container-oriented security, trusted resources, optimized container images, CVE scanning
  • 20% Monitoring, Logging, and Runtime Security - to analyze and detect threats

Docker (specifically, Docker Engine) provides operating-system-level virtualization in containers.

Whizlabs.com

Known for their sample exams, $99/year on sale from $199 for all courses, by instructors from India. If you want faster video playback, you have to set it for every video. Annoying.

K21Academy

https://k21academy.com/docker-kubernetes/certified-kubernetes-security-specialist-cks-step-by-step-activity-guide-hands-on-lab/ is normally $997, with a 60 day money-back guarantee.


Exam Preparations

CAUTION: Whatever resource you use, ensure it matches the version of Kubernetes being tested (e.g., v1.19 as of 1 Sep 2020).

Sign up for exam

CNCF is part of the Linux Foundation, so…

  1. Get an account (Linux Foundation credentials ) at https://identity.linuxfoundation.org. https://myprofile.linuxfoundation.org/

    NOTE: It’s a non-profit organization, thus the “.org”.

    https://docs.linuxfoundation.org/tc-docs/certification/lf-candidate-handbook

    https://docs.linuxfoundation.org/tc-docs/certification/faq-cka-ckad-cks

    https://docs.linuxfoundation.org/tc-docs/certification/tips-cka-and-ckad

  2. Login to linuxfoundation.org and join as a member for a $100 discount toward certifications.

  3. Go to https://training.linuxfoundation.org/linux-courses/system-administration-training/kubernetes-fundamentals and pay for the $300 exam plus $199 more if you want to take their class.

    Alternately, if you have a Registration code: https://trainingportal.linuxfoundation.org/redeem

  4. Find dates and times when you’re in a quiet, private indoor place where no one else (e.g., co-workers) is near.

    Select a date when your mental and physical cycles are at a biorhythm peak

  5. Open a Chrome browser.
  6. Use your Linux Foundation credentials to create an account at

    examslocal.com.

  7. Select the date and your time zone. The website is incredibly slow.
  8. Click the date again in orange. Click the time range.

  9. Install the Chrome extension used to take exams, verified during exam scheduling.

    Click the green “I agree”, then “Confirm Reservation”.

  10. Pick a date when your biorhythms are positive on Intellectual and Physical, not hitting bottom or crossing from positive to negative:

    https://keisan.casio.com/exec/system/1340246447

  11. Sign-in at examslocal.com. For “Sponsor and exam”, type one of the following:

    • Linux Foundation : Certified Kubernetes Application Developer (CKAD) - English
    • Linux Foundation : Certified Kubernetes Administrator (CKA) - English
    • Linux Foundation : Certified Kubernetes Security (CKS) - English ?

    Click on the list, then Click “Next”.

    Click the buttons in the Checklist form and select time of exam until you get all green like this:

    k8s-checklist

    pod-overview Docs and tutorials from Kubernetes.io.

  12. Click “Or Sign In With” tab and select “Sign in for exams powered by the Linux Foundation”.
  13. Log in using your preferred account.
  14. Click “Handbook link” to download it.

    https://trainingportal.linuxfoundation.org/learn/course/certified-kubernetes-application-developer-ckad/exam/exam

  15. PROTIP: You’ll need a corded (Logitech) webcam (not one built-in).

  16. Set up your home computer to take the exam Compatibility Check using the Chrome extension from “Innovative Exams”, which uses your laptop camera and microphone to watch you use a virtual Ubuntu machine.

    Sample exam questions

  17. https://github.com/dgkanatsios/CKAD-exercises by Dimitris-Ilias Gkanatsios (of Microsoft) provides sample exercises to prepare for the CKAD exam.

  18. Practice enough

    Use play environments

  19. Use playground instances to use kubectl commands in a Kubernetes cluster (60 minutes at a time):

    KodeKloud instances come up the quickest.

    Coursera makes use of Google cloud, requiring copy and pasting of accounts and passwords, bringing up CLI, creating environment variables, etc. every time.

    KataKoda hosts Red Hat’s OpenShift Playground using its “oc” CLI program. The KataKoda playground environment is pre-loaded with Source-to-Image (S2I) builders for Java (Wildfly), Javascript (Node.JS), Perl, PHP, Python and Ruby. Templates are also available for MariaDB, MongoDB, MySQL, PostgreSQL and Redis.

    Build speed

  20. See 3 preview exam questions (with answers explained) after signing up at https://killer.sh (Killer Shell’s) CKA/CKAD Simulator, which provides a close replica of the CKAD exam browser terminal, with 20 CKAD and 25 CKA questions, at 29.99€ for two sessions (before 10% discount). Each session includes 36 hours of access to a cluster environment. They recommend you start the first session when you’re at the beginning of your CKA or CKAD journey.

  21. Practice Keyboard shortcuts for Bash

  22. Get proficient with the vim editor so that commands are intuitive (so you don’t have to pause to remember how to do things in vim). Use the vimtutor program that usually gets installed along with the normal vim/gvim package.

    Bookmarks to docs

    You are allowed one browser window: kubernetes.io, so:

  23. PROTIP: Rather than typing from scratch, copy and paste commands from pages in Kubernetes.io.

    Key sections of kubernetes.io are:

    • Documentation
    • Getting started
    • Concepts
    • Tasks
    • Tutorials
    • Reference

    PROTIP: Create bookmarks in Chrome for links to ONLY kubernetes.io pages

    kubernetes-bookmarks

Day before exam

  1. Arrange to sleep well the night before the exam.
  2. If you travel, make sure you have adjusted to the correct time zone.

  3. Move files out of your Downloads and Documents folders.
  4. Clear your desk of papers, books. The proctor will be checking.

Before start of exam questions

  1. Take a shower. Put on a comfortable outfit. Brush your teeth. Make your bed.
  2. Eat proteins rather than carbohydrates and sugar before the exam.
  3. Fill a clear, label-free bottle with a clear liquid (water). You’re not allowed to eat snacks.

  4. Put on music that helps you concentrate. Turn it off before starting the test.

  5. Start calm, not rushed. Be setup and be ready a half hour before the scheduled exam.

  6. You may start your exam up to 15 minutes prior to your scheduled appointment time.

  7. Have your ID out and ready to present to the video camera.

  8. The exam takes 180 minutes (3 hours), so before you start, go to the bathroom.
  9. To the proctor, show your ID and pan all the way around the room.

Start of exam

  1. Enter website and click grey “Take Exam” button.

    https://trainingportal.linuxfoundation.org/learn/course/certified-kubernetes-application-developer-ckad/exam/exam

  2. Customize your terminal for productivity.

  3. 19 questions means less than 10 minutes per question. So avoid getting bogged down on the longer complex questions. First go through all the questions to answer the easiest ones first. Along the way, mark ones you want to go back to.

    NOTE: Although there are 19 objectives, not all objectives planned are in every exam.

  4. PROTIP: Avoid writing yaml from scratch.

    PROTIP: Learn to search within kubernetes.io to copy code.

    Generate a declarative yaml file from an imperative command:

  5. Create the pod from a yaml file (recording the command):

    kubectl create -f file.pod.yaml --record
  6. Paste to the Notepad available during the exam. Save commands there for copying rather than retyping.

    k -n pluto get all -o wide
    
  7. Use kubectl explain.

  8. Use --help, as in kubectl create configmap --help .

  9. Run a temporary busybox pod to test retrieval of an external URL (using wget):

    k run tmp --restart=Never --rm --image=busybox -i -- wget -O- 10.12.2.15
    

    Notice “Never” is title cased.

  10. Do not delete/remove what you have done! People/robots review your servers after the test.

After exam

  1. Create an Acclaim.com account to manage publicity across many certifications.
  2. If you pass the exam (score above 66%), go to acclaim to get your digital badge to post on social media.

    https://trainingportal.linuxfoundation.org/pages/exam-history


Social media communities

Latest videos about K8s

For the most up-to-date information by practitioners:

Kubernetes Concepts Explained in 9 minutes! Oct 31, 2019 by Mumshad Mannambeth

KubeCon conferences are held 3 times a year in Asia, Europe, and the US by https://events.linuxfoundation.org.

Others:

O’Reilly’s Infrastructure & Ops Superstream Series: Session 3 Oct. 21, 2020: Kubernetes

Interactive KataKoda lab on OReilly.com: Deploying Python APIs on Kubernetes: Deploying a Development Kubernetes Cluster using the slim K3s Kubernetes distribution from Rancher, a Certified Lightweight Kubernetes Distribution built for IoT and Edge computing. It stores data using sqlite3 instead of etcd. Its bootstrap script is the K3sup installer at https://github.com/alexellis/k3sup.

arkade - portable Kubernetes marketplace

BOOK: Kubernetes Patterns by Bilgin Ibryam, Roland Huß

Jonathan Johnson

@EllenKorbes: “Successful Kubernetes Development Workflows”

Jonathan Johnson’s live online training “Kubernetes in Three Weeks” courses through O’Reilly:

  • Part I - Meshing and Observability

  • Part II - Operators and Serverless

  • Part III - CI/CD Pipelines on Kubernetes

Programming Kubernetes (book)

Kubernetes Best Practices (book)

Kubernetes Up and Running, second edition (book)

Video courses

Research into learning points to “spaced repetition” as the way to get what we want to remember into long-term memory.

Different instructors explain concepts in different logical sequences.

So looking at different video classes provides that.

KodeKloud also from Udemy.com

PROTIP: I think this is the most thorough and logically presented set of tutorials for CKAD and CKA.

I have several tabs open taking it:

  1. The courses are available for USD $228/year (less with coupon FESTIVERJ20) at KodeKloud.com, where videos are presented using the Teachable.com platform.

  2. The courses can also be purchased at Udemy.com:

  3. Either way you purchase it, the course includes access to a KataKoda-powered lab environment for one hour at a time.

    PROTIP: The k alias for kubectl is already configured, so type k instead of kubectl.

  4. A “Quiz Portal” invoked from within the labs UI provides challenge questions and answers.

    Some hints reference answer files in folder “/var/answers”, viewed by a command in the Terminal, such as:

    cat /var/answers/answer-ubuntu-sleeper-2.yaml
  5. Within the quiz, some links to solutions to labs on YouTube are broken. So stay on the Udemy UI for Solution videos.

    KodeKloud’s YouTube channel still provides a series for absolute beginners on Git, Ansible, Puppet, Shell, Docker, Kubernetes. https://www.youtube.com/watch?v=QJ4fODH6DXI

  6. Teacher and founder Mumshad Mannambeth (living in Singapore) also created a free work simulator for people to gain “real” work experience at https://kodekloud.com/p/kodekloud-engineer.

  7. For CKA, he also authored https://github.com/mmumshad/kubernetes-the-hard-way (on Virtualbox and Vagrant using Docker instead of containerd) which takes a manual approach to bootstrap a Kubernetes cluster from scratch, for learning to understand each task performed by the automation. The tutorial adapts the original using GCP developed by Kelsey Hightower.

  8. Join the Slack channel for CKAD and CKA students.

  9. KodeKloud’s Mock Tests, which Ansar (Amoury) Memon’s “The FrontOpsGuys” on YouTube answers for Test 1 and Test 2

For CKA, https://github.com/kodekloudhub/certified-kubernetes-administrator-course

Linux Foundation LFS258

One would think the definitive courses would be from the same organization that created the exam.

The 35-hour video/on-site course LFD259 $199 upgrade offered with the CKAD exam sign-up covers this series of topics:

  1. Course Introduction
  2. Kubernetes Architecture
  3. Build
  4. Design
  5. Deployment Configuration
  6. Security
  7. Exposing Applications
  8. Troubleshooting

LFD459 is the 3-day on-site equivalent course code.

PROTIP: LF class materials ( https://training.linuxfoundation.org/cm/prep) are distributed in .bz2 format which can be opened on macOS by the Unarchiver

I took https://training.linuxfoundation.org/cm/prep/?course=LFS258 but found it to be like “drinking water from a fire hose” in that the 600-page courseware is comprehensive. But exercises during the class are not repeatable after the class.

The Linux Foundation exam focuses only on “pure” Kubernetes commands and excludes add-ons such as OpenStack, Helm, Istio. However, LFD259 covers Istio anyway.

Ready-for.sh establishes the environment:

wget http://bit.ly/LFready -O ready-for.sh
   chmod 755 ready-for.sh
   ./ready-for.sh --help
   # Not for macOS
   

https://github.com/cncf/curriculum - v1.19 contains one-page curriculum pdf’s.

Nana’s TechWorld on YouTube

YouTube channel “Nana’s TechWorld” by entrepreneur Nana Janashia (from Austria) features animated illustrations.

Docker Tutorial for Beginners [Full Course in 3 Hours].

VIDEO intro of Unique Udemy course Logging in Kubernetes with EFK Stack | The Complete Guide covers how to set up K8s clusters from scratch and configure logging with ElasticSearch, Fluentd and Kibana

EdX

edX.org publishes some courses from Linux Academy.

LFS158x: Introduction to Kubernetes

O’Reilly

Certified Kubernetes Application Developer (CKAD) Prep Course July 2019 [4h 53m] uses https://github.com/bmuschko/ckad-study-guide and https://github.com/bmuschko/ckad-crash-course “In-depth and hands-on practice for acing the exam” by Benjamin Muschko (@bmuschko, bmuschko.com, automatedascent.com)

https://github.com/bmuschko/cka-crash-course

7h video class over 3 days live course by Sander van Vugt, who, as a Linux expert, provides in-depth CentOS install advice (including SELinux) and files available nowhere else. His diagrams are on a lightboard.

BLAH: O’Reilly’s videos are annoying because you have to move the sound up on every new chapter.

CloudAcademy

CloudAcademy’s 11-hour “Learning Path” course was updated August 27th, 2019 by Logan Rakai.

Its Playground lab enables you to skip all the install details to build this: k8s-cloudacademy-after

PROTIP: A browser-based session times out too quickly and is cumbersome to copy and paste. So use SSH instead.

Prep standalone SSH client on macOS

  1. Open an SSH client Terminal by pressing command+spacebar for the Spotlight, then type “Terminal” and select “Terminal.app”.
  2. Enter your user password if prompted.
  3. Create a folder “k8s-cloud”, then navigate into it:

    cd .. && mkdir -p k8s-cloud && cd k8s-cloud
  4. Switch to the CloudAcademy lab page. Automatically launched are four EC2 instances in the “us-west-2b” AWS Availability Zone: The “bastion” exposed to a public internet subnet and, within a private subnet, a “k8s-master” t3.micro and two “k8s-node” t3.small. In about 10 minutes, all instance status reach “running” and Alarm Status “finish loading”.
  5. Click the box to the left of “bastion-host”. When “Connect” changes from gray, click it.
  6. Click the PEM file (such as “554282681613.pem”) and save the file in that folder.
  7. Copy the PEM file name and save to your Clipboard.
  8. Switch to the Terminal.
  9. Construct a variable set command because it’s referenced several times:

    PEMF="554282681613.pem"
  10. Set permissions (so your key is not publicly viewable for SSH to work):

    chmod 400 "$PEMF"
  11. Compose the command to connect to your instance by typing and pasting its Public DNS: first type “ssh -i”, then paste the pem file, then type “ubuntu@” for the user name inside the host, then switch to the EC2 page to copy and paste the “Public DNS (IPv4)” URL:

    ssh -i "$PEMF" ubuntu@ec2-34-210-196-19.us-west-2.compute.amazonaws.com

    The wizard should automatically detect the key you used to launch the instance. But if the response is: “ubuntu@github.com: Permission denied (publickey).”, try renaming the file:

    mv ~/.ssh/config config.sav
  12. Type yes and press Enter when you see:

    The authenticity of host 'ec2-34-210-196-19.us-west-2.compute.amazonaws.com (34.210.196.19)' can't be established.
    ECDSA key fingerprint is SHA256:sg0jaN4L4RX8ZAxGDo/elIf6HFU+H/3OTG4DALwU5Ik.
    Are you sure you want to continue connecting (yes/no/[fingerprint])? 
    

    You should see a prompt such as:

    ubuntu@ip-10-0-128-5:~$

  13. Customize the Terminal environment for your productivity.

  14. Switch to the CloudAcademy.com page and scroll down to the list of commands. If you customized alias k:

    Using the alias setup above, ensure you can see master and nodes:

    k get nodes
  15. Make use of files at https://github.com/cloudacademy/intro-to-k8s/tree/master/src described by this Intro to Kubernetes course:

    cd src && ls
    10.1-namespace.yaml         5.1-namespace.yaml
    10.2-data_tier_config.yaml  5.2-data_tier.yaml
    10.3-data_tier.yaml         5.3-app_tier.yaml
    10.4-app_tier_secret.yaml   5.4-support_tier.yaml
    10.5-app_tier.yaml          6.1-app_tier_cpu_request.yaml
    1.1-basic_pod.yaml          6.2-autoscale.yaml
    1.2-port_pod.yaml           7.1-namespace.yaml
    1.3-labeled_pod.yaml        7.2-data_tier.yaml
    1.4-resources_pod.yaml      7.3-app_tier.yaml
    2.1-web_service.yaml        8.1-app_tier.yaml
    3.1-namespace.yaml          9.1-namespace.yaml
    3.2-multi_container.yaml    9.2-pv_data_tier.yaml
    4.1-namespace.yaml          9.3-app_tier.yaml
    4.2-data_tier.yaml          9.4-support_tier.yaml
    4.3-app_tier.yaml           metrics-server
    

    PROTIP: Kubernetes treats pods as immutable, so rather than changing a running pod, delete it and recreate it.

  16. Create and delete pod (all named “mypod”):

    kubectl create -f 1.1-basic_pod.yaml
    kubectl get pods
    kubectl describe pod mypod | more
    kubectl delete po mypod --grace-period=0 --force
    

    PROTIP: --grace-period=0 --force for immediate execution (especially during exam)

  17. Get the “image:” name within the output:

    k describe pod xxx | grep -i image
  18. Get the Node name:

    k get pods -o wide

Pluralsight

PROTIP: Pluralsight videos can be viewed as a Tivo app on my TV. That’s a big plus. No others offer that.

Pluralsight has a 14-hour series of videos on CKAD by Dan Wahlin (@danwahlin, codewithdan.com). Courses in chronological order:

export APP_ENV=development
   export DOCKER_ACCT=codewithdan
   

CAUTION: aws v2 CLI became generally available in Feb 2020 shortly after this course was published.

Nigel Poulton (@NigelPoulton, nigelpoulton.com), Docker Captain:

LinkedIn

“Kubernetes Essential Training: Application Development” by Matt Turner (from England) is hands-on using minikube 1.9.2 and kubernetes-cli 1.18.2 on a Mac:

  • Running a local cluster
  • Running containers
  • Viewing logs
  • Remotely executing commands
  • Orchestrating real-world workloads
  • Batch processing with jobs and cron jobs
  • Managing resource usage
  • Keeping containers secure
  • Advanced deployment patterns
  • Analyzing traffic
  • Extending Kubernetes
  • DRY deployment and debugging tools

    The class has quizzes and covers 3rd-party tools such as Helm, Kustomize, kubectl sniff (WireShark), Skaffold, telepresence.

Learning Kubernetes (on a Mac) by Karthik Gaekwad (when he was at Oracle) references files in https://github.com/karthequian/Kubernetes/blob/master/CourseHandout.md.

“DevOps Foundations: Transforming the Enterprise Transforming your organization” by Mirco Hering, Global DevOps Practice Lead at Accenture

LinuxAcademy

The CKAD Troubleshooting class is highly recommended.

Udemy

“Learn Kubernetes” provides a tutorial on yaml.

Udemy.com has a CKAD course with Tests updated 09/2020 with 9.5 hours of video. It includes 30-minute lightning rounds to practice the stress of taking the exam. Surviving this gives you confidence.

“Docker and Kubernetes: The Complete Guide” by Stephen Grider. Diagrams for the 21h video use draw.io, accessing https://github.com/StephenGrider/DockerCasts/tree/master/diagrams

ACloudguru.com

ACloudguru.com CKAD course by William Boyd has 3.5 hours of video organized according to exam domains, 13 hands-on labs, and 3 practice exams based on v1.13.

(ACloud.guru’s Vicky Tanya Seno at Santa Monica College is preparing a course on Kubernetes)

Others on CKAD:

Tips on preparing for CKAD by Muralidaran Shanmugham

Others on CKA:


Installation

https://redhat-scholars.github.io/kubernetes-tutorial/kubernetes-tutorial/installation.html

Minikube Alternatives

Instead of minikube, there are also K3s, Microk8s on Linux, and Minishift.

  • KinD (Kubernetes in Docker) https://kind.sigs.k8s.io/ builds K8s clusters out of Docker containers running Docker in Docker, good for integration with a CI/CD pipeline. CAUTION: This utility is built for the Kubernetes team’s convenience and thus does not have some convenience features and add-ons.

NOTE: Kubernetes can run on top of alternative container runtimes such as Red Hat’s CRI-O; other container tools include podman and LXC.

But let’s start by installing minikube on your laptop.

Kustomize templating utility

Kustomize.io provides a kustomize command to create customized raw, template-free YAML (overlay) files for multiple purposes (dev, prod). It leaves the base (original) YAML file untouched and usable as is. For example, dev would have replicas: 1 while prod would have replicas: 5. References:
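A minimal sketch of that replica override (the folder layout and the app name “my-app” are illustrative, not from this article): the base manifests stay untouched while a prod overlay changes only the replica count:

    # base/kustomization.yaml
    resources:
    - deployment.yaml          # contains the Deployment named my-app with replicas: 1

    # overlays/prod/kustomization.yaml
    resources:
    - ../../base
    replicas:
    - name: my-app             # must match the Deployment name in the base
      count: 5

Apply an overlay with kubectl apply -k overlays/prod (or kustomize build overlays/prod to just render the YAML).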

Some feel Kustomize doesn’t provide enough flexibility and that it results in too many different files for one application.

Alternatives are yq and Jsonnet.

Jsonnet

Jsonnet (pronounced “jay sonnet”) at jsonnet.org (from a 20% project within Google) is a DSL templating language which can generate .json, .conf, .sh, and .ini files. Its Creative Commons-licensed C++ code is at github.com/google/jsonnet.

A faster go-jsonnet is written in the Go language and built using Bazel. There’s also Json.NET.

Sample code in this article shows how Jsonnet templating extends JSON to “use variables, conditionals, functions, etc. to generate JSON, and feels more like writing JavaScript in some cases than writing a template.”

cyber-jsonnet.venn.svg

“This ticked all our boxes: giving us the repeatability of a templating environment with the power of something closer to a programming language.”

“We combine Jsonnet with ArgoCD to scale our deployments across thousands of microservices.”

Minikube install

REF:

Minikube goes beyond older Docker For Mac (DFM) and Docker for Windows (DFW) and includes a node and a Master when it spins up in a local environment (such as your laptop).

CAUTION: At time of writing, https://github.com/kubernetes/minikube has 257 issues and 20 pending Pull Requests, but we’re using it anyway. MUST READ: Known Issues with Minikube (Ingress and ingress-dns addons are not supported on Linux)

PROTIP: Minikube makes your Mac’s fan fly! Before starting minikube, press command+Spacebar, type “Activity Monitor.app”, and click to open it. Click the “% CPU” tab label to sort on it. Note the number for process “com.docker.hyperkit”. If the Mac’s fan spins constantly: in Docker’s Preferences, under Resources, adjust Memory higher.

Each node in a cluster uses at least 300 MiB of memory.

More about drivers:

  • https://docs.okd.io/latest/minishift/getting-started/setting-up-virtualization-environment.html
  • https://minikube.sigs.k8s.io/docs/drivers/

Minikube on Windows

  1. Start Docker before installing/starting minikube:

    systemctl enable --now docker
  2. Verify your Docker container type:

    docker info --format '{{.OSType}}'

    On macOS, the response is “Linux”.

    On Windows, (paradoxically) make sure Docker Desktop’s container type setting is Linux and not Windows. See docker docs on switching container type.

    See https://minikube.sigs.k8s.io/docs/drivers/hyperv/

Minikube on MacOS using Docker Desktop

Docker Desktop install on macOS

NOTE: Docker drivers do not currently support ARM architecture (only AMD64).

  1. Follow Install Docker for Desktop:

  2. If the Docker Desktop icon appears (it’s already installed), right-click on it and shut it down.

    Then upgrade it:

    brew cask upgrade docker
    

    This automatically installs the HyperKit hypervisor for macOS.

    So there is no need to do what older docs say:

    brew install docker-machine-driver-xhyve
    

    Make sure Docker Desktop is running:

    Install Minikube

  3. I do not recommend using curl to obtain a specific older version of Minikube:

    curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_1.7.2-0_amd64.deb \
    && sudo dpkg -i minikube_1.7.2-0_amd64.deb
    
  4. Install Minikube on a Mac:

    brew install minikube
    
  5. A lot prints out. To get the caveats about what was installed:

    brew info minikube
    
    minikube: stable 1.15.1 (bottled), HEAD
    Run a Kubernetes cluster locally
    https://minikube.sigs.k8s.io/
    /usr/local/Cellar/minikube/1.15.1 (8 files, 62.4MB) *
      Poured from bottle on 2020-11-22 at 11:46:27
    From: https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/minikube.rb
    License: Apache-2.0
    ==> Dependencies
    Build: go ✘, go-bindata ✘
    Required: kubernetes-cli ✔
    ==> Options
    --HEAD
         Install HEAD version
    ==> Caveats
    Bash completion has been installed to:
      /usr/local/etc/bash_completion.d
     
    zsh completions have been installed to:
      /usr/local/share/zsh/site-functions
    ==> Analytics
    install: 44,822 (30 days), 110,033 (90 days), 415,969 (365 days)
    install-on-request: 37,280 (30 days), 92,684 (90 days), 342,920 (365 days)
    build-error: 0 (30 days)
    

    There is no need to do what older docs say: Make hyperkit the default driver*:

    minikube config set driver hyperkit
  6. Make sure you’re running the version just installed:

    minikube version

    The result:

    minikube version: v1.15.1
    commit: 23f40a012abb52eff365ff99a709501a61ac5876
    
  7. Installation should have created folder:

    ls $HOME/.minikube

    The result:

    addons              ca.pem              certs               key.pem             profiles
    ca.crt              cache               config              logs                proxy-client-ca.crt
    ca.key              cert.pem            files               machines            proxy-client-ca.key
    
  8. PROTIP: Assign permissions to avoid run error:

    sudo chown -R $USER $HOME/.minikube
    chmod -R u+wrx $HOME/.minikube
    

    No response is expected on success.

    Start Minikube with Docker driver

    PROTIP: If you start minikube with sudo you’ll get:

  9. PROTIP: Define this as an alias to your ~/.desktop_profile:

    alias mk8s="minikube delete;minikube start --driver=docker --memory=4096"

    The --memory value can be adjusted per instructions below.

    PROTIP: Before starting minikube, minikube delete to avoid this error message:

    💢  Exiting due to GUEST_DRIVER_MISMATCH: The existing "minikube" cluster was created using the "docker" driver, which is incompatible with requested "hyperkit" driver.
    💡  Suggestion: Delete the existing 'minikube' cluster using: 'minikube delete', or start the existing 'minikube' cluster using: 'minikube start --driver=docker'
    

    PROTIP: Don’t use sudo minikube or you’ll get this error message:

    ❌  Exiting due to DRV_AS_ROOT: The "hyperkit" driver should not be used with root privileges.

    Alternately, start within Virtualbox *:

    sudo minikube start --memory=4096

    An example of an expected response:

    😄  minikube v1.15.1 on Darwin 10.15.7
    ✨  Using the docker driver based on user configuration
    👍  Starting control plane node minikube in cluster minikube
    🚜  Pulling base image ...
    💾  Downloading Kubernetes v1.19.4 preload ...
     > preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4: 486.35 MiB
    🔥  Creating docker container (CPUs=2, Memory=1990MB) ...
    🐳  Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
    🔎  Verifying Kubernetes components...
    🌟  Enabled addons: storage-provisioner
    🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
    

    If Docker Desktop is not running, you won’t see the icon at the top of the screen and you’ll get this error:

    🤷  Exiting due to PROVIDER_DOCKER_NOT_FOUND: The 'docker' provider was not found: exec: "docker": executable file not found in $PATH
    💡  Suggestion: Install Docker
    📘  Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
    

    An example of a good start:

    🙄  "minikube" profile does not exist, trying anyways.
    💀  Removed all traces of the "minikube" cluster.
    😄  minikube v1.15.1 on Darwin 10.15.7
    ✨  Using the docker driver based on user configuration
    👍  Starting control plane node minikube in cluster minikube
    🔥  Creating docker container (CPUs=2, Memory=1987MB) ...
    🐳  Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
    🔎  Verifying Kubernetes components...
    🌟  Enabled addons: storage-provisioner, default-storageclass
    🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
    

    Alternately, start the minikube service, with add-ons (which each runs in a pod):

    On Mac:

    minikube start ... --addons=dashboard --addons=metrics-server --addons=ingress --addons="ingress-dns"
    

    On Windows:

    minikube start --vm-driver=hyperv
    
  10. To enable services after starting minikube:

    minikube addons enable metrics-server
  11. To see whether the metrics-server is running, or another provider of the resource metrics API (metrics.k8s.io), run the following command:

    kubectl get apiservices

    The response:

    NAME                                   SERVICE   AVAILABLE   AGE
    v1.                                    Local     True        24s
    v1.admissionregistration.k8s.io        Local     True        24s
    v1.apiextensions.k8s.io                Local     True        24s
    v1.apps                                Local     True        24s
    v1.authentication.k8s.io               Local     True        24s
    v1.authorization.k8s.io                Local     True        24s
    v1.autoscaling                         Local     True        24s
    v1.batch                               Local     True        23s
    v1.certificates.k8s.io                 Local     True        23s
    v1.coordination.k8s.io                 Local     True        23s
    v1.events.k8s.io                       Local     True        23s
    v1.networking.k8s.io                   Local     True        23s
    v1.rbac.authorization.k8s.io           Local     True        23s
    v1.scheduling.k8s.io                   Local     True        23s
    v1.storage.k8s.io                      Local     True        23s
    v1beta1.admissionregistration.k8s.io   Local     True        24s
    v1beta1.apiextensions.k8s.io           Local     True        24s
    v1beta1.authentication.k8s.io          Local     True        24s
    v1beta1.authorization.k8s.io           Local     True        24s
    v1beta1.batch                          Local     True        23s
    v1beta1.certificates.k8s.io            Local     True        23s
    v1beta1.coordination.k8s.io            Local     True        23s
    v1beta1.discovery.k8s.io               Local     True        23s
    v1beta1.events.k8s.io                  Local     True        23s
    v1beta1.extensions                     Local     True        23s
    v1beta1.networking.k8s.io              Local     True        23s
    v1beta1.node.k8s.io                    Local     True        23s
    v1beta1.policy                         Local     True        23s
    v1beta1.rbac.authorization.k8s.io      Local     True        23s
    v1beta1.scheduling.k8s.io              Local     True        23s
    v1beta1.storage.k8s.io                 Local     True        23s
    v2beta1.autoscaling                    Local     True        24s
    v2beta2.autoscaling                    Local     True        24s
    

  12. If you plan on doing a lot of work, configure Docker with more memory: The default is 1990MB.

    Click the Docker icon on your Mac, then select “Preferences” then “Resources”:

    k8s-minikube-resources

    TODO: Check how much memory is already being used.

    Slide the appropriate tab to specify a larger number.

    Kubectl CLI install

    NOTE: The kubectl CLI (kubernetes-cli) is installed by the minikube install.

  13. Install kubectl command:

    sudo apt-get update && sudo apt-get install -y apt-transport-https
    

    kubectl CLI client install

    Kubernetes administrators use kubectl (kube + ctl), the CLI tool run outside Kubernetes servers to control them. It’s automatically installed within Google Cloud instances, but on Mac clients:

  14. Install on a Mac:

    brew install kubectl
    
    🍺  /usr/local/Cellar/kubernetes-cli/1.8.3: 108 files, 50.5MB
    1.19.2
    

    It’s required by eksctl and minikube.

  15. Verify the version installed:

    kubectl version --client
    

    At time of writing:

    Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:12:48Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"darwin/amd64"}
    

    NOTICE that Golang programming is a component.

    If you get this error message:

    The connection to the server localhost:8080 was refused - did you specify the right host or port?
    

Install Docker & Kubernetes on CentOS

  1. Install the Docker Desktop app

    On CentOS/RHEL 7:

    yum install docker

    On CentOS/RHEL 8, Docker is not installed by default, so download docker-ce from docker.io:

    https://docs.docker.com/install/linux/docker-ce/centos/

    The Open Container Initiative at https://opencontainers.org defined the image-spec to define how to package containers in a “filesystem bundle” and run them in a container. This ensures compatibility among containers, no matter the originating environment.

    Start Minikube within VM

  2. To run minikube within a VM, we will need to use the None (bare-metal) driver. The none driver requires minikube to be run as root, until #3760 can be addressed. To make none the default driver:

    sudo minikube config set vm-driver none
    

    These changes will take effect upon a minikube delete and then a minikube start

    Stop Minikube

  3. Stop the service:

    minikube stop
  4. Recover space:

    minikube delete
    
    🔥  Deleting "minikube" in docker ...
    🔥  Deleting container "minikube" ...
    🔥  Removing /Users/wilson_mar/.minikube/machines/minikube ...
    💀  Removed all traces of the "minikube" cluster.
    

    Since Kubectl 1.8, scale is the preferred way to control graceful delete.

    kubectl scale --replicas=3 deployment nginx-deployment

    Since Kubectl 1.8, rollout and rollback support stateful sets.

    kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record
    kubectl rollout status deployment.v1.apps/nginx-deployment
    kubectl rollout history deployment nginx-deployment
    
  5. To rollback, undo

    kubectl rollout undo deployment nginx-deployment
    kubectl rollout history deployment/nginx-deployment --revision=3

  6. To continue, start minikube again.


Configuration

Service cluster IPs and ports are found through Docker --link compatible environment variables specifying ports opened by the service proxy.
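For example (a sketch assuming a Service named redis-master already exists and the pod was started after it; the addresses shown are illustrative), those variables can be listed from inside a pod:

    kubectl exec mypod -- env | grep REDIS_MASTER
    # REDIS_MASTER_SERVICE_HOST=10.0.0.11
    # REDIS_MASTER_SERVICE_PORT=6379
    # REDIS_MASTER_PORT=tcp://10.0.0.11:6379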

  1. REMEMBER: Unlike k describe xxx, k cluster-info is a single verb:

    kubectl cluster-info

    Example response:

    Kubernetes master is running at https://127.0.0.1:32768
    KubeDNS is running at https://127.0.0.1:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
    
  2. To further debug and diagnose:

    kubectl cluster-info dump

Configure Contexts

  1. Show the current context:

    kubectl config current-context
    

    The expected response on macOS is “minikube”.

  2. To avoid “The connection to the server localhost:8080 was refused”

    https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting

    sudo touch $HOME/.kube/config
    sudo chown $USER $HOME/.kube/config
    chmod 600 $HOME/.kube/config
    

    Alternately, delete the old config from ~/.kube and then restart Docker (for macOS); it rebuilds the config folder.

  3. What is in the Kubernetes configuration file showing configuration settings and current context:

    cat $HOME/.kube/config

    Sample response:

    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority: /Users/wilson_mar/.minikube/ca.crt
        server: https://127.0.0.1:32768
      name: minikube
    contexts:
    - context:
        cluster: minikube
        namespace: default
        user: minikube
      name: minikube
    current-context: minikube
    kind: Config
    preferences: {}
    users:
    - name: minikube
      user:
        client-certificate: /Users/wilson_mar/.minikube/profiles/minikube/client.crt
        client-key: /Users/wilson_mar/.minikube/profiles/minikube/client.key
     

    REMEMBER: When a namespace is not specified in yaml, the name “default” is assumed.

  4. The same configuration as in file ~/.kube/config is displayed by:

    kubectl config view
    

PROTIP: If your server is not up, you’ll see this error message when attempting a kubectl command:

The connection to the server 127.0.0.1:32772 was refused - did you specify the right host or port?

Customize Terminal

  1. Save a few seconds typing:

    Resource Creation Tips for the Kubernetes CKA / CKD Certification Exam by John Tucker

    Setup prompt at left

  2. Setup the prompt so it always appear at the left:

    export PS1="\n  \w\[\033[33m\]\n$ "
    

    Set up the k alias

  3. Set up a shorthand alias so you can type “k” instead of kubectl:

    alias k=kubectl
    complete -F __start_kubectl k
    
  4. Set up a shorthand variable for dry-run yaml output:

    export do="--dry-run=client -o yaml"

    Bash Autocompletion

  5. Save a few seconds by setting up autocompletion. On bash:

    source <(kubectl completion bash)
    echo "source <(kubectl completion bash)" >> ~/.bashrc
    

    On ZSH:

    source <(kubectl completion zsh)
    echo "[[ $commands[kubectl] ]] && source <(kubectl completion zsh)" >> ~/.zshrc
     

    Vim Editor - indentation

    PROTIP: vim is the only editor available, so learn to search lines in vim (Esc, /, the text to be searched).

    :set shiftwidth=2

    To indent several lines with one command: Esc, Shift+V for Visual Line mode, highlight lines, then Shift+. (>) to shift right, or Shift+, (<) to shift left.

    VIDEO “Vim crash course”.
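    To avoid retyping :set each session, settings like these (a common exam-prep suggestion, not an official requirement) can go into ~/.vimrc so yaml indents with 2 spaces:

    " ~/.vimrc -- two-space, space-based indentation for yaml editing
    set tabstop=2
    set shiftwidth=2
    set expandtab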

K command tips and tricks

Its code page has a summary description of:

    "Production-Grade Container Scheduling and Management"
  1. Specify the kubectl command by itself to list its sub-commands.

  2. Specify the kubectl command with --help to get info:

    k completion --help

Declarative Kubernetes Commands

K8s recognizes both imperative commands and declarative yaml files.

Imperative vs. Declarative

REF:

  • Imperative commands act directly on live objects.

  • Imperative commands provide no track record history.

  • Declarative commands act on yaml files which define objects.
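For example, the same nginx pod can be created either way (the file name here is illustrative):

    # Imperative: acts directly on the live cluster, leaving no file behind
    kubectl run nginx --image=nginx

    # Declarative: generate a yaml file, keep it in version control, then apply it
    kubectl run nginx --image=nginx --dry-run=client -o yaml > nginx-pod.yaml
    kubectl apply -f nginx-pod.yaml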

TASK: Create a pod with the ubuntu image to run a container to sleep for 5000 seconds. (Modify file ubuntu-sleeper-2.yaml)

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-sleeper-2
spec:
  containers:
  - name: ubuntu
    image: ubuntu
    command:
    - "sleep"
    - "5000"
   

The command can also be written as:

command: [ "sleep", "5000" ]
   

This corresponds to a Dockerfile, where command: overrides ENTRYPOINT and args: overrides CMD:

ENTRYPOINT ["python", "app.py"]
CMD ["--color", "red"]
   

PROTIP: Names of resources can be up to 253 characters. No underlines (use dashes and dots).

  1. A pod that adds to an emptyDir volume an HTML file every 10 seconds (so you can tell it’s running from a browser):

    apiVersion: v1
    kind: Pod
    metadata:
      name: html-updater       # a name is required; this one is illustrative
    spec:
      volumes:
      - name: html
        emptyDir: {}
      containers:
      - name: nginx
        image: nginx:alpine
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
          readOnly: true
      - name: html-updater
        image: alpine
        command: ["/bin/sh", "-c"]
        args:
        - while true; do date >> /html/index.html;
            sleep 10; done
        volumeMounts:
        - name: html
          mountPath: /html
    
  2. A hostPath Volume of type Socket (unlike emptyDir, hostPath data lives on the node and does not disappear when the pod dies):

    apiVersion: v1
    kind: Pod
    metadata:
      name: docker-access      # a name is required; this one is illustrative
    spec:
      volumes:
      - name: docker-socket
        hostPath:
          path: /var/run/docker.sock
          type: Socket
      containers:
      - name: docker
        image: docker
        command: ["sleep", "86400"]    # sleep needs a duration argument
        volumeMounts:
        - name: docker-socket
          mountPath: /var/run/docker.sock
    


Namespaces provide a scope for names, as a way to divide cluster resources.

PROTIP: Each UUID created (described) by K8s is unique across all namespaces within a cluster.

Namespaces are intended for use in environments with many users spread across multiple teams.

K8s namespaces are used to separate resources (network, files, users, processes, IPCs, etc.) into virtual clusters inside a K8s cluster.

  • Nginx-Ingress controller
  • Database (shared mysql-service or mongodb-service)
  • Logging: Elastic stack
  • Monitoring

  • Development
  • Staging
  • Blue/Green production

Namespaces provide isolation among different project teams, so they don’t overwrite each other’s definitions.

Secrets and ConfigMaps are not shared across namespaces.

Different limits on resources (CPU, RAM, storage) can be defined for each namespace.
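A hedged sketch of such a limit, using a ResourceQuota object (the names and amounts are illustrative):

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: team-quota           # illustrative name
      namespace: namespace-1     # the namespace being limited
    spec:
      hard:
        requests.cpu: "4"        # total CPU requested by all pods in the namespace
        requests.memory: 8Gi
        limits.cpu: "8"
        limits.memory: 16Gi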

Thus, separation of different namespaces is useful within large enterprises.

You don’t need to create or think about the default namespace.

  1. Specify a namespace in a command:

    k run nginx --image=nginx --namespace=namespace-1

  2. Attach a namespace as the context for all subsequent commands:

    k config set-context --current --namespace=namespace-1

  3. List pods across all namespaces in a cluster:

    k get pods --all-namespaces

  4. API Resources within a namespace:

    k api-resources --namespaced=true

  5. List namespaces:

    Out of the box, without creating anything:

    k get ns
    kubectl get namespaces
    • default holds resources users create without specifying a namespace

    • kube-public contains publicly accessible (without auth) ConfigMaps ? which contain cluster info (kubectl cluster-info)

    • kube-system holds k8s internal system processes (master components, etc.) and manages objects created by the system itself (Controllers, ConfigMaps, Secrets, Deployments)

    • kube-node-lease holds lease objects containing heartbeats of nodes and the availability of nodes

    • kubernetes-dashboard is created only within minikube.

    Add-on Dashboard

    The Kubernetes dashboard add-on to Kubernetes was originally intended to provide a convenient web-based way for administrators to manage a cluster. In the past, it was backed by a highly privileged kubernetes service account by default.

    The default configuration exposed a public interface vulnerable to remote attacks.

    So completely disable the kubernetes dashboard by default.

    Instead of using the Kubernetes dashboard, use the GCP console’s built-in GKE dashboard or kubectl commands. They provide all the old dashboard’s functionality (and more) without exposing an additional attack surface.

  6. Open the Minikube Dashboard, which pops up on your default browser at a localhost port (such as localhost:53764):

    minikube dashboard
    🔌  Enabling dashboard ...
    🤔  Verifying dashboard health ...
    🚀  Launching proxy ...
    🤔  Verifying proxy health ...
    🎉  Opening http://127.0.0.1:54702/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...
    

  7. Escape by pressing ctrl+C.


### Declarative yaml

  1. Declarative yaml to define a new namespace:

    apiVersion: v1           # Object controller version
    kind: Namespace          # Object classification
    metadata:                # Associated data
      name: ticketing
      labels:
        venue: opera
        watch: cpu
    spec:                    # specific object details
    

    Alternately, imperative commands to define a new namespace:

    kubectl create namespace ticketing
    kubectl label namespace ticketing venue=opera watch=cpu
    kubectl get namespaces
    kubectl get namespace ticketing -o yaml
    
  2. REMEMBER: List api-resources (not just resources) not bound to a namespace (NOT namespaced), which can be referenced from any namespace, such as PersistentVolumes and nodes:

    k api-resources --namespaced=false
    
  3. On minikube, delete all resources from the default namespace:

    kubectl delete --all pods --namespace=default
    kubectl delete --all deployments --namespace=default
    kubectl delete --all services --namespace=default
    

Kubernetes can manage several namespaces running in each cluster.

“The primary grouping concept in Kubernetes is the namespace. Namespaces are also a way to divide cluster resources between multiple uses. That being said, there is no security between namespaces in Kubernetes; if you are a “user” in a Kubernetes cluster, you can see all the different namespaces and the resources defined in them.” – from the book: OpenShift for Developers, A Guide for Impatient Beginners by Grant Shipley and Graham Dumpleton.

PROTIP: Install the kubectx command (https://github.com/ahmetb/kubectx) to switch among clusters, and kubens to switch among namespaces (usage sketch below). Both are written in Bash and Go. References:

  • https://computingforgeeks.com/manage-multiple-kubernetes-clusters-with-kubectl-kubectx/
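Assuming both tools are installed, typical usage looks like this (the context and namespace names are illustrative):

    kubectx                 # list contexts defined in ~/.kube/config
    kubectx minikube        # switch kubectl to the minikube context
    kubens                  # list namespaces in the current context
    kubens namespace-1      # make namespace-1 the default for subsequent commands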

OpenShift project wall namespaces

Red Hat’s OpenShift product adds Projects as “walls” between namespaces, ensuring that users or applications can only see and access what they are allowed to. OpenShift projects wrap a namespace by adding security annotations which control access to that namespace. Access is controlled through an authentication and authorization model based on users and groups.

This diagram illustrates what OpenShift adds: kubernetes-openshift-502x375-107638


Dockerfile to Pod yaml correspondence

k8s-dockerfile-sleep

Imperative commands for one web server:

Klab:

  1. For Docker to create an Nginx web server:

    docker run --name my-nginx -p 80 nginx:1.19.2

    Pod yaml

  2. For Kubernetes to establish a “naked” pod using the (no longer deprecated) run command (prefer a Deployment instead):

    kubectl run my-nginx --port 80 --image=nginx:1.19.2

    Alternately:

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.19.2
        ports:
        - containerPort: 80
    

    NOTE: The pod definition above is defined (with an additional indentation) as a template within deployments.
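
    For example, here is a minimal sketch (the Deployment name my-nginx-deployment and single replica are assumptions for illustration) showing how that pod definition nests under template: within a Deployment:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-nginx-deployment    # hypothetical name
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      template:                    # the pod definition above, indented one more level
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: my-nginx
            image: nginx:1.19.2
            ports:
            - containerPort: 80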

    PROTIP: In kubectl v1.18+, the k run command creates a pod rather than a deployment, so there is no need to set the flag --restart=Never.

  3. The opposite is “kubectl delete pod x”.

  4. List pods

    k get pods
  5. Copy a specific pod name generated to paste in the command to see its logs:

    kubectl logs pod/pod-name
  6. Output a log file from a pod (named “pod-x”):

    k logs pod-x | sudo tee ~/opt/answers/mypod.logs

    TOOL: stern

    elasticsearch, fluentd, kibana: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch

    k port-forward service/kibana-logging 5601:5601 --namespace=kube-system

  7. Find all pods that have been started with the kubectl run command: ???

    kubectl get pods nginxpod --show-labels | grep run

    kubectl run test --image=nginx --dry-run=client -o jsonpath='{.metadata.labels}'

  8. Execute an interactive terminal on a pod with bash installed (most Linux images have /bin/sh installed):

    kubectl exec -it pod-name -- /bin/bash

    Declarative yaml

  9. Generate a declarative yaml file from an imperative command:

    k run redis --image=redis --dry-run=client -o yaml > mypod.yaml
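
    The generated mypod.yaml looks roughly like this (exact fields vary by kubectl version):

    apiVersion: v1
    kind: Pod
    metadata:
      creationTimestamp: null
      labels:
        run: redis
      name: redis
    spec:
      containers:
      - image: redis
        name: redis
        resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
    status: {}
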
  10. vi mypod.yaml to edit it.

    Every K8s yaml file must have these top-level properties:

    apiVersion:
    kind:
    metadata:
    spec:
    
    kind:         apiVersion:
    Pod           v1
    Service       v1
    ReplicaSet    apps/v1
    Deployment    apps/v1

    kind: abbreviations

    PROTIP: Use abbreviations (in lower case) of basic Kubernetes components to save time typing:

    k get po,no,svc,rs,deploy
    abbreviations: pods, nodes, services, replicasets, deployments

    REMEMBER: kind: full value must be Title case (first character upper case), singular (not plural).

    REMEMBER: IRL, Admins do not code to work with individual pods, because the whole point of K8s is to automate that chore.

    Admins define abstractions for deployment of images (Docker containers) which define templates (blueprints) for creating pods.

    CRD (Custom Resource Definition) defines a custom/new resource kind. It uses apiVersion: apiextensions.k8s.io (like built-in code for StatefulSets). Improbable.io makes use of a CRD for its etcdclusters (apiVersion: etcd.improbable.io). For example: kubectl tree etcdcluster example

    metadata:

    metadata: contains a dictionary with indented name: and labels:

    spec:

    In spec: is a dictionary item containers: specifying a list/array represented by a dash in front of each item:

      spec:
        containers:
        - name: nginx-container
          image: nginx

    REMEMBER: Under containers:, the dash in front of name is indented.

  11. Create the instance by applying the yaml file:

    k apply -f mypod.yaml
  12. Edit the running pod’s definition (this opens the live object in your editor):

    k edit pod mypod
  13. Extract a declaration yaml file from a running pod:

    k get pod mypod -o yaml > definition.yaml

    But this can be messy because you’ll have to delete all the system-generated lines (status:, creationTimestamp:, etc.).

    In vi normal mode, delete 5 lines (including the line under the cursor) with 5dd.

  14. A Busybox image contains several apps:

    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-ready
      namespace: default
    spec:                        # minimal spec (assumed) so the Pod definition is valid
      containers:
      - name: busybox
        image: busybox
        command: ["sleep", "3600"]
    

    kubectl apply makes changes if its subject already exists (the command is declarative).

    REMEMBER: kubectl create throws an error if the resource already exists, whereas kubectl apply won’t. kubectl create says “create this thing” whereas kubectl apply says “do whatever is necessary (create, update, etc) to make it look like this”.

    The resulting file includes additional annotations.

ArgoCD

Argo CD is a declarative, GitOps Continuous Delivery tool for Kubernetes.

“GitOps” means ArgoCD monitors GitHub and applies changes of declarative yaml to K8s Controllers automatically:
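
As a sketch (the application name and repo URL below are hypothetical, not from this article), an Argo CD Application manifest that watches a Git path and syncs it into a cluster looks like:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app                   # hypothetical name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/k8s-manifests.git   # hypothetical repo
    targetRevision: HEAD
    path: overlays/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated: {}                  # apply changes detected in Git automatically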

kubectl run

  1. Make an imperative command:

    kubectl run --image=nginx web
    
    pod/web created
    
kubectl get pods
   
NAME   READY   STATUS    RESTARTS   AGE
web    1/1     Running   0          2m59s
   
  1. Details:

    kubectl describe pod web
    
    Name:         web
    Namespace:    default
    Priority:     0
    Node:         minikube/172.17.0.3
    Start Time:   Sun, 04 Oct 2020 07:02:16 -0600
    Labels:       run=web
    Annotations:  <none>
    Status:       Running
    IP:           172.18.0.3
    IPs:
      IP:  172.18.0.3
    Containers:
      web:
     Container ID:   docker://ecd03de690f64202c6bdf35d4b4192e5af32854d9c77093f31136570507cc600
     Image:          nginx
     Image ID:       docker-pullable://nginx@sha256:c628b67d21744fce822d22fdcc0389f6bd763daac23a6b77147d0712ea7102d0
     Port:           <none>
     Host Port:      <none>
     State:          Running
       Started:      Sun, 04 Oct 2020 07:02:49 -0600
     Ready:          True
     Restart Count:  0
     Environment:    <none>
     Mounts:
       /var/run/secrets/kubernetes.io/serviceaccount from default-token-72hc5 (ro)
    Conditions:
      Type              Status
      Initialized       True 
      Ready             True 
      ContainersReady   True 
      PodScheduled      True 
    Volumes:
      default-token-72hc5:
     Type:        Secret (a volume populated by a Secret)
     SecretName:  default-token-72hc5
     Optional:    false
    QoS Class:       BestEffort
    Node-Selectors:  <none>
    Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                  node.kubernetes.io/unreachable:NoExecute for 300s
    Events:
      Type    Reason     Age    From               Message
      ----    ------     ----   ----               -------
      Normal  Scheduled  4m40s  default-scheduler  Successfully assigned default/web to minikube
      Normal  Pulling    4m39s  kubelet, minikube  Pulling image "nginx"
      Normal  Pulled     4m7s   kubelet, minikube  Successfully pulled image "nginx" in 31.950535327s
      Normal  Created    4m7s   kubelet, minikube  Created container web
      Normal  Started    4m7s   kubelet, minikube  Started container web
    

Multi-Container Pods

The kube-scheduler assigns pods to nodes at runtime. Before scheduling, it checks resources, QoS, policies, user specs.

This needs application executables to be designed and built as microservices (independent, small, reusable code) instead of a monolith.

Containers within each pod share the same lifecycle.

Several containers: the webapp, log-agent, Istio, etc.

Patterns:

The ambassador pattern is to proxy in front of accessing a database (perhaps sharded).

One use case is to make consistent the format of dates sent to the database.

Another use case is to route requests to one of several databases (dev/test/prod).

The Adapter pattern presents a standardized interface across multiple pods, to normalize output logs and monitoring data. Adapts third-party software.

The Sidecar pattern runs a helper container (such as a log agent) alongside the main application container in the same pod, sharing its volumes and network.
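
For example, a minimal sketch (the names and paths here are illustrative assumptions) of a sidecar log agent sharing an emptyDir volume with the main container:

apiVersion: v1
kind: Pod
metadata:
  name: webapp-with-sidecar        # hypothetical name
spec:
  containers:
  - name: webapp
    image: nginx
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx    # main container writes logs here
  - name: log-agent                # sidecar reads what the main container writes
    image: busybox
    command: ["sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /logs
  volumes:
  - name: logs
    emptyDir: {}                   # shared, pod-lifetime scratch volume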

Pod scheduling    Affinity         Anti-Affinity
To Pods           podAffinity      topologySpreadConstraints
To Nodes          nodeAffinity     Taints and Tolerations

Controller objects

Because Deployments provide a helpful “front end” to ReplicaSets, training focuses on Deployments.

Deploy Replicas for Replication, Rolling Updates

A ReplicaSet controller ensures that a population of Pods, all identical to one another, are running at the same time.

Deployments manage their own ReplicaSets to achieve the declarative goals you prescribe, so you will most commonly work with Deployment objects.

k8s-deployment-rs-1568x584

(The ReplicaSet process replaces the older ReplicationController.)

ReplicaSets enable deployment of several pods, and check their status as a single unit (replicas).

This enables Load Balancing across several machines for more capacity, redundancy, and rolling updates without downtime.

ReplicaSets monitor the number of pods and create pods to match the number of replicas for the label type requested in the yaml.

The sample ReplicaSet.yml file:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-app
  labels:
    app: myapp
    type: front-end
spec:
  replicas: 3
  selector:
    matchLabels:
      type: front-end
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.19.2
        ports:
        - containerPort: 80
   

A selector is required within ReplicaSet yaml.

PROTIP: The spec: template: is copied from a pod definition yaml, then indented.

PROTIP: Use vi to indent the pasted block (visually select the lines, then press > to shift them right).

Deployments let you do declarative updates to ReplicaSets and Pods.

kubectl run deployment-name \
   --image [IMAGE]:[TAG] \
   --replicas 3 \
   --labels [KEY]=[VALUE] \
   --port 8080 \
   --generator deployment/apps.v1 \
   --save-config
   

Deployments let you create, update, roll back, and scale Pods, using ReplicaSets as needed. For example, when you perform a rolling upgrade of a Deployment, the Deployment object creates a second ReplicaSet, and then increases the number of Pods in the new ReplicaSet as it decreases the number of Pods in its original ReplicaSet.

Replication Controllers perform a similar role to the combination of ReplicaSets and Deployments, but their use is no longer recommended.

If you need to deploy applications that maintain local state, StatefulSet is a better option. A StatefulSet is similar to a Deployment in that the Pods use the same container spec. The Pods created through Deployment are not given persistent identities, however; by contrast, Pods created using StatefulSet have unique persistent identities with stable network identity and persistent disk storage.

If you need to run certain Pods on all the nodes within the cluster or on a selection of nodes, use DaemonSet. DaemonSet ensures that a specific Pod is always running on all or some subset of the nodes. If new nodes are added, DaemonSet will automatically set up Pods in those nodes with the required specification. The word “daemon” is a computer science term meaning a non-interactive process that provides useful services to other processes. A Kubernetes cluster might use a DaemonSet to ensure that a logging agent like fluentd is running on all nodes in the cluster.
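
As a sketch (the namespace and image tag are assumptions), a fluentd logging-agent DaemonSet would look roughly like this:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd                    # hypothetical name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.11   # illustrative tag
        resources:
          limits:
            memory: 200Mi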

  1. PROTIP: Remember the “.apps” when listing replicasets:

    k get replicasets.apps
  2. Identify the image:

    k describe replicasets.apps replicaset-1  | grep -i image:

    Modify replicas to scale

    • Edit the file, then
      k replace -f replicaset-def.yaml

    REMEMBER: several formats don’t modify the file:

    • k scale --replicas=6 -f replicaset-def.yaml

    • k scale --replicas=6 replicaset myapp-replicaset

    • Scale based on load

Practice test with quiz about pod commands: https://kodekloud.com/courses/kubernetes-certification-course-labs/lectures/12039431

Deployments

To upgrade gradually in a production environment without downtime, do a rolling update.

Deployments make use of Replicasets.

kubectl run --restart=Always      # creates deployment
kubectl run --restart=Never       # creates pod
kubectl run --restart=OnFailure   # creates job
   

To perform an upgrade, the Deployment object will create a second ReplicaSet object, and then increase the number of (upgraded) Pods in the second ReplicaSet while it decreases the number in the first ReplicaSet.

  1. List deployments, different ways:

    k get deployment
    k get deployments
    k get deployment.apps
    k get deployments.apps
    

Practice test with quiz about deployments: https://kodekloud.com/courses/kubernetes-certification-course-labs/lectures/12039434

DaemonSets

DaemonSets ensure that all nodes run a copy of a specified pod.

As nodes are added or removed from the cluster, a DaemonSet adds or removes the required pods.

  1. Deleting a DaemonSet removes the pods it manages.

svc = Services

VIDEO: Nina

Services provide an un-changing IP address to pods in the back-end.

PROTIP: Services are defined with a port.

Internal services are only reachable within a cluster.

Types of services:

  • ClusterIP exposes only inside the cluster
  • NodePort exposes a port through the node to the world
  • LoadBalancer exposes the service externally using a cloud provider’s load balancer

REMEMBER: Port numbers in deployment yaml must match port numbers in services yaml.
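
For example, a minimal sketch (names and ports are assumed for illustration) where the Service’s targetPort matches the containerPort declared in the Deployment’s pod template:

apiVersion: v1
kind: Service
metadata:
  name: web-svc                    # hypothetical name
spec:
  type: ClusterIP
  selector:
    app: web                       # must match the Deployment’s pod labels
  ports:
  - port: 80                       # port the Service exposes inside the cluster
    targetPort: 8080               # must equal the containerPort in the Deployment yaml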

Example yaml of services:

  • auth.yaml
  • frontend.yaml
  • hello-blue.yaml
  • hello-green.yaml
  • hello.yaml
  • monolith.yaml

sa = ServiceAccounts

  1. Administrator: Create a new service account named “backend-team”:

    kubectl create serviceaccount backend-team
  2. Define the service 2.1-web_service.yaml:

    spec:
      selector:
        app: nginx
      ports:
      - protocol: TCP
        port: 80
        targetPort: 8080
    
  3. Verify visibility using curl:

    kubectl create -f 2.1-web_service.yaml
    kubectl get services
    kubectl describe service webserver  # copy IP: value 10.108.171.76
    kubectl describe nodes | grep -i address -A 1
    curl 10.0.0.100:3#### (replace #### with the actual port digits)
    

    PROTIP: A token secret is assigned automatically to each service account.

  4. To show all components in a mongodb app:

    kubectl get all | grep mongodb 

    Expose,

    Expose service within deployment

    PROTIP: External services are exposed by Endpoints: (NodePorts).

    https://kubernetes.io/docs/reference/generated

    k expose deployment deployment --port=6379 -n namespace --name=service-name

    LoadBalancer

    One type of service is a LoadBalancer with an external IP; it extends a NodePort service, which in turn extends a ClusterIP service:

    apiVersion: v1
    kind: Service
    metadata:
      name: la-lb-service
    spec:
      type: LoadBalancer
      sessionAffinity: ClientIP
      selector:
        app: la-lb
      ports:
      - protocol: TCP
        port: 3200       # the cluster-internal (ClusterIP) port
        targetPort: 3000
        nodePort: 30010
      clusterIP: 10.0.171.223
      loadBalancerIP: 78.12.23.17
    

    sessionAffinity: ClientIP ensures that each client’s first request determines which Pod will be used for all subsequent connections, since switching versions mid-transaction can cause issues.

    Notice static IP addresses are being specified here. Is that a good thing?

    k get svc

    The LoadBalancer type service assigns an EXTERNAL-IP address which accepts external requests.

    Service Discovery

  5. cat /etc/resolv.conf

    search default.svc.cluster.local  svc.cluster.local  cluster.local
    nameserver 10.96.0.10
    options ndots:5
  6. List the URL:

    minikube service mongo-express-service

    To test, create a database.

    ConfigMaps

    DEFINITION: ConfigMap is an API object used to store non-confidential data in key-value pairs.

    Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.

    shared mysql-service yaml ConfigMap

  7. Define a commonly used ConfigMap within a service named “database”:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: mysql-configmap
    data: 
      db_url: mysql-service.database
    

    REMEMBER: “.database” above references the namespace. [1:15:17]
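
    A sketch (the container and variable names are assumptions) of a Pod consuming the db_url key above as an environment variable:

    spec:
      containers:
      - name: webapp               # hypothetical container
        image: nginx
        env:
        - name: DB_URL             # hypothetical variable name
          valueFrom:
            configMapKeyRef:
              name: mysql-configmap   # the ConfigMap defined above
              key: db_url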

  8. View

    k get configmap -n my-namespace

Kind: Job

Batch jobs are supervisor processes that run a task once to completion and then stop. The Job controller within the Control Plane creates one or more Pods required to run a task.

spec: completions: 5 defines the number of pods started within a job.

spec: parallelism: 1 defines the number of pods running at the same time.

When the task is completed, the Job terminates.

3 types of jobs:

  • completions=1 & parallelism=1 for non-parallel: one pod is started
  • completions=n & parallelism=m for n fixed completions in parallel
  • completions=1 & parallelism=m for a work queue: m pods started in parallel until one completes (rarely used)
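
A sketch (name, image, and command are assumptions) combining the completions: and parallelism: fields described above:

apiVersion: batch/v1
kind: Job
metadata:
  name: parallel-job               # hypothetical name
spec:
  completions: 5                   # total pods that must finish successfully
  parallelism: 2                   # pods allowed to run at the same time
  template:
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo processing an item; sleep 5"]
      restartPolicy: Never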

If a node fails while a Job is executing on that node, Kubernetes will restart the Job on a node that is still running.

To fail jobs that don’t finish within a set number of seconds, set activeDeadlineSeconds (shown in the “Additional conditions” step below).

This example-job.yaml uses perl language built-in command to compute the value of Pi to 2,000 places and then prints the result:

apiVersion: batch/v1
kind: Job
metadata:
  # Unique key of the Job instance
  name: example-job
spec:
  template:
    metadata:
      name: example-job
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl"]
        args: ["-Mbignum=bpi", "-wle", "print bpi(2000)"]
      # Do not restart containers after they exit
      restartPolicy: Never
   

kubectl apply -f example-job.yaml

  1. For the job’s start time and success status, describe the job

    Start Time:     Thu, 20 Dec 2018 14:34:09 +0000
    Pods Statuses:  0 Running / 1 Succeeded / 0 Failed
    
  2. Additional conditions:

    ...
    spec:
      backoffLimit: 4
      activeDeadlineSeconds: 300
      template:
      ...
    
  3. Delete job after finish:

    ttlSecondsAfterFinished: 20
  4. When a job is completed, the Job terminates Pods used unless:

    kubectl delete job job-name --cascade false 
  5. After running, check the status of jobs

    kubectl get jobs 
    NAME     COMPLETIONS   DURATION   AGE
    somejob   5/5           27s        9m41s
    

Kind: Cronjob

  1. When a job is complete, view results in logs:

    kubectl logs pod-name

    The API Server authenticates using one of several methods (basic, certificates, tokens, etc.).

    “Authorization” refers to determining whether the requester is allowed to perform based on role (using RBAC).

    The API Server routes several kinds of yaml declaration files: Pod, Deployment of pods, Service, Job, Configmap.

The CronJob controller runs Pods on a time-based schedule like Linux cron uses (minute, hour, day, month, day of week 0-6)

  1. apply example-cronjob.yaml with batch apiVersion and kind: Cronjob, with a schedule spec:

    apiVersion: batch/v1beta1
    kind: CronJob
    metadata:
      name: hello
    spec:
      schedule: "*/1 * * * *"
      jobTemplate:
        spec:
          template:
            spec:
              containers:
              - name: hello
                image: busybox
                args:
                - /bin/sh
                - -c
                - date; echo "Hello, World!"
              restartPolicy: OnFailure
    

Additional:

...
spec:
  schedule: "*/1 * * * *"
  startingDeadlineSeconds: 3600    # to stop repeated unsuccessful attempts to start
  concurrencyPolicy: Forbid        # or replace existing concurrent jobs
  suspend: True
  successfulJobsHistoryLimit: 3    # retained in history
  failedJobsHistoryLimit: 1        # 
  jobTemplate:
    ...
   
  1. Add-on for jetbrains:

    https://plugins.jetbrains.com/plugin/10485-kubernetes


Misc. List:

kubectl get -n kube-system serviceaccounts

QUESTION: Create a Cron job that will run ???

Podspecs

Podspecs are yaml files that describe a pod.

apiVersion: v1
kind: Pod
metadata:
  name: busybox-ready
  namespace: default
   

Deleting Pods

k delete pod frontend --grace-period=0 --force

Plug-in manager

  1. Like apt-get, but for use within Kubernetes:

    kubectl krew install tree

    From the krew-index plug repository on the internet.

  2. For a deployment, list its Pods within ReplicaSet:

    kubectl tree deployment ???

Add-ons to Kubernetes

Kubernetes is a platform used for building platforms such as OpenShift, Helm, EKS, CrossPlane.

helm install --name prometheus stable/prometheus-operator

k port-forward service/prometheus-grafana 9091:80

https://github.com/Albertoimpl/k8s-for-the-busy by Alberto C. Rios (@albertoimpl)

https://github.com/ojhughes/k8s-for-the-busy-java-developer by Ollie Hughes (@olliehughes82)


Cloud Kubernetes Services

Each offering has its own acronym (where KS = Kubernetes Service):

  • ACK = Alibaba Cloud Kubernetes

  • ECS = Elastic (AWS) Container Service
  • EKS = Elastic (AWS) Kubernetes Service
  • GKS = Google
  • IKS = IBM cloud

  • DOKS = Digital Ocean
  • OKS = Oracle
  • PKE = Banzai Cloud
  • MKE = D2iQ (Day two iQ) rebranded from Mesos DC/OS meta clusters
  • OKD = OpenShift (Red Hat) Enterprise platform as a service (PaaS) Origin community distribution
  • PKS = VMware Tanzu, from its purchase of Pivotal and Heptio (founded by Joe Beda and Craig McLuckie)
  • RKE = Rancher
  • Canonical

  • Rackspace’s Kubernetes as a Service

Helm charts

VIDEO: Helm (helm.sh) is the default package manager for Kubernetes (like pip and NuGet). It was started by a company called Deis in October 2015 out of a hackathon.

Helm templating creates yaml.

Helm is further automated with Tilt.

The Illustrated Children’s Guide to Kubernetes by Deis, Inc.

Helm Charts are a collection of templates that can be pulled from a version-controlled Helm repo to define, install, and upgrade complex Kubernetes applications, thus reducing copy-and-paste (and room for error in repetition).

A Helm chart can be used to quickly create an OpenFaaS (Serverless) cluster:

    git clone https://github.com/openfaas/faas-netes && cd faas-netes
       kubectl apply -f ./namespaces.yml 
       kubectl apply -f ./yaml_armhf
       

Videos:

OpenShift routes to services

OpenShift’s Router is instead a HAProxy container (taking the place of NGINX).

HAProxy uses VRRP (Virtual Router Redundancy Protocol), which automatically assigns available Internet Protocol routers to participating hosts.

k8s-openshift-projects-461x277-64498.jpg

Services can be referenced by external clients using a host name such as “hello-svc.mycorp.com” by using OpenShift Enterprise, which uses routes that define the rules the HAProxy applies to incoming connections.

Routes are deployed by an OpenShift Enterprise administrator as routers to nodes in an OpenShift Enterprise cluster. To clarify, the default Router in Openshift is an actual HAProxy container providing reverse proxy capabilities.

netpol = NetworkPolicies

HA Proxy cluster

For network resiliency, HA Proxy cluster distributes traffic among nodes.

Endpoints track the IP addresses of Pods with matching selectors.

EndpointSlice groups network endpoints together with Kubernetes resources.
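
To see the Pod IP addresses a Service currently routes to (the service name here is a placeholder):

kubectl get endpoints my-service -o wide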

Cluster networking

A private ClusterIP is accessible by nodes only within the same cluster.

Services listen on the same nodePort (TCP 30000 - 32767 defined by --service-node-port-range).

k8s-arch-ruo91-797x451-104467

The diagram above is referenced throughout this tutorial, particularly in the Details section below. It is by Yongbok Kim who presents animations on his website.

Communications with outside service network callers occur through a single Virtual IP address (VIP) going through a kube-proxy pod within each node. The kube-proxy load balances traffic to deployments, which are load-balanced sets of pods within each node. Kube-proxy IPVS Mode is native to the Linux kernel. CBR0 (Custom Bridge zero) forwards eth0, which rewrites the destination IP to a pod behind the Service. [3:18 into chapter 6, “Big Picture”]

Kubernetes manages the instantiating, starting, stopping, updating, and deleting of a pre-defined number of pod replicas based on declarations in *.yaml files or interactive commands.

The number of pods replicated is based on deployment yaml files. Service yaml files specify what ports are used in deployments.

k8s-svc-deploy-asso

In 2019 Kubernetes added auto-scaling based on metrics API measurement of demand.

This Architectural Diagram pdf:  k8s-linuxacademy-arch-912x415-32433.jpg is described in the Linux Academy’s CKA course of 5:34:43 hours of videos by Chad Miller (@OpenChad).

Kubernetes Architecture Source: X-Team

PROTIP: To list clusters and switch between them, consider brew installing utilities https://github.com/ahmetb/kubectx and kubens.

kube-ps1.sh creates a shell pod envbin.


https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/

k create cronjob my-job --image=busybox --schedule="*/1 * * * *" -- echo hello

K8s API

https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/ (which is one big page):

  • Workloads APIs: Container, Job, CronJob, Deployment, StatefulSet, ReplicaSet, Pod, ReplicationController
  • Service APIs: Endpoints, Ingress, Service
  • Config and storage APIs: ConfigMap, CSIDriver, Secret, StorageClass, Volume
  • Metadata APIs: Controller, CRD, Event, LimitRange, HPA (HorizontalPodAutoscaler), PodDisruptionBudget, …
  • Cluster APIs: APIService, Binding, CSR, ClusterRole, Node, Namespace, Lease, PersistentVolume -> HostPathVolume.

The aggregation layer lets you install additional Kubernetes-style APIs in your cluster.

Deployments

A Deployment is an API object that manages a replicated application, typically by running Pods with no local state.

  • auth.yaml
  • frontend.yaml
  • hello-green.yaml
  • hello-canary.yaml
  • hello.yaml
  1. Create a yaml file from a command to deploy 3 replica pods:

    kubectl create deployment nginx-lab8 --image=nginx --replicas=3 --dry-run=client -o yaml > lab8.yaml
    
  2. To delete a deployment:

    kubectl delete deployments.apps mydep ???

Health Checks

Probes

Accept traffic? readinessProbe actuator/health
Restart the container? livenessProbe actuator/info

  1. Configure “livenessProbe” (in folder health) and

    “readinessProbe” (in folder readiness) on port 80

    In healthy-monolith-with-probes.yaml

    ...
      livenessProbe:
        httpGet: 
          path: "/actuator/info"
          port: 8080
        failureThreshold: 3      # Default is 3
        successThreshold: 1
        initialDelaySeconds: 5   # after init/startup before applying probe
        periodSeconds: 30        # Default is 10
        timeoutSeconds: 10       # Default is 1
      readinessProbe:
        httpGet: 
          path: "/actuator/health"
          port: 8080
        failureThreshold: 3      # Default is 3
        successThreshold: 1
        initialDelaySeconds: 15  # before applying health checks
        periodSeconds: 30        # Default is 10
        timeoutSeconds: 10       # Needed?
    
    • ExecAction executes an action inside the container
    • TCPSocketAction checks against the container’s IP address on a specified port
    • HTTPGetAction - HTTP Get request against container

Alternately:

 httpGet: 
      path: "/index.html"
      port: 80
   

Probes with Dekorate

...
  startupProbe:
    httpGet: 
      path: "/healthz"
      port: liveness-port
    failureThreshold: 30
    periodSeconds: 10
   

Skaffold

Okteto

Logging

  1. Get pod name

    kubectl get pods

  2. List log entries for pod:

    kubectl logs -f pod-name event-simulator   # event-simulator is the container name within the pod

PROTIP: To display the tail end of logs for containers and multiple pods (rather than scrolling through an entire log), install stern at https://github.com/wercker/stern/tree/master/stern. It’s from Wercker (which was acquired by Oracle in 2017). BTW, on a ship stern is the tail end. Install from https://github.com/wercker/stern/releases


Multi-cloud

Being open-source has enabled Kubernetes to flourish on several clouds.

PROTIP: Cloud SaaS provide a GUI that presents clickable specifications to avoid mis-typing obscure keywords.

However, the complexity of configurations means that you have to learn how to visually navigate the GUI menus and forms.

Kubernetes in the cloud also enables multi-region setups. GCP has --horizontal-pod-autoscaler-downscale-stabilization to provide a wait period (5 minutes) before another scale-down action.

GKS

Google’s Kubernetes Service offers KTD (Kubernetes Threat Detection). On each node, a KTD daemonset collects, interprets, and annotates signals for a back-end KTD Detection Plane that uses Machine Learning to make findings for the Google SCC (Security Command Center) and Cloud Logging:

k8s-ktd

Google’s approach enables detection of broad, new classes of infection, such as reverse shells (“phoning home”).

IBM CloudLabs

https://www.youtube.com/watch?v=aSrqRSk43lY&list=PLOspHqNVtKABAVX4azqPIu6UfsPzSu2YN&index=2

Equinix Metal, orion-equinix

https://inlets.dev/blog/2020/12/15/multi-cluster-monitoring.html

Google Cloud GKE GCE Qwiklabs

Google Kubernetes Engine (GKE) is Google’s container management SaaS offering.

GKE runs within the Google Compute Platform (GCP) on top of Google Compute Engine (GCE) providing machines.

GKE provides networking within VPC, monitoring, logging, and CI/CD (Google Build).

k8s-gcp-738x314-14535

A search for “Kubernetes” within the GCP Console yields:

k8s-gcp-search-656x866-37655

30 days of free training instances after completing a Tour class. Qwiklabs has several hands-on labs using Kubernetes on Google Cloud. Its labs are used in Coursera courses, which is why there are lab solution videos such as

Qwiklabs QUEST: Secure Workloads in Google Kubernetes Engine consists of 8 labs covering 8 hours of the Kubernetes in the Google Cloud Qwiklab quest

Deploying Google Kubernetes Engine Clusters from Cloud Shell

First K8s app

  1. In the Google Cloud Console, on the Navigation menu, in the Dashboard, click “Go to APIs Overview.
  2. Confirm Country and Terms of Service, then click “AGREE AND CONTINUE”.
  3. Click to expand APIs & Services. Click “+ ENABLE APIS AND SERVICES”.
  4. In the Search for APIs & Services box, enter “Cloud Build”.
  5. In the resulting card for the “Cloud Build API”, if you do not see “API enabled”, click the ENABLE button.
  6. Use the Back button to return to the previous screen with a search box. In the search box, enter “Container Registry”.
  7. Click card “Google Container Registry API”. If you do not see “API enabled”, click the ENABLE button.

  8. Click the “Activate Cloud Shell” icon. Drag the Console divider to see more.
  9. Create file: nano quickstart.sh

    #!/bin/sh
    echo "Hello, world! The time is $(date)."
    
  10. Press ctrl+S to save and ctrl+X to exit.
  11. Create file: nano Dockerfile

    FROM alpine
    COPY quickstart.sh /
    CMD ["/quickstart.sh"]
    
  12. Press ctrl+S to save and ctrl+X to exit.

  13. In Cloud Shell, build an image based on the “Dockerfile”:

    gcloud builds submit --tag gcr.io/${GOOGLE_CLOUD_PROJECT}/quickstart-image .

    Notice the dot at the end to specify that the source file is in the current working directory.

  14. Authorize Cloud Shell.
  15. Create a soft link as a shortcut to the working directory ~/ak8s:.

    ln -s ~/training-data-analyst/courses/ak8s/v1.1 ~/ak8s
  16. Get the repo (wait for it to finish):

    git clone https://github.com/GoogleCloudPlatform/training-data-analyst
    ln -s ~/training-data-analyst/courses/ak8s/v1.1 ~/ak8s
    cd ~/ak8s/Cloud_Build/a
    
  17. Confirm on the Navigation menu UI: scroll down to TOOLS section. Click Container Registry to select Images. Click quickstart-image for a list.

  18. Run Google Cloud Build:

    cd ~/ak8s/Cloud_Build/b
    gcloud builds submit --config cloudbuild.yaml .
    
  19. Confirm whether the command shell knows the build failed (returns 1 instead of 0):

    echo $?
  20. cat cloudbuild.yaml

    ...
    steps:
           - name: 'gcr.io/cloud-builders/docker'
      args: [ 'build', '-t', 'gcr.io/$PROJECT_ID/quickstart-image', '.' ]
    images:
           - 'gcr.io/$PROJECT_ID/quickstart-image'
    
  21. Start a Cloud Build

    gcloud builds submit --config cloudbuild.yaml .
  22. Back in the Navigation menu UI, click Container Registry > Images and then click quickstart-image to see two versions of quickstart-image listed (a and b).

Google Kubernetes Engine (GKE)

kubernetes-pods-599x298-35069

https://google-run.qwiklab.com/focuses/639?parent=catalog

PROTIP: For GKE we disable all legacy authentication, enable RBAC (Role Based Access Control), and enable IAM authentication.

Pods are defined by a manifest file read by the apiserver, which deploys them onto nodes.

Pods go into “succeeded” state after being run because pods have short lifespans – deleted and recreated as necessary.

The replication controller automatically adds or removes pods to comply with the specified number of pod replicas declared are running across nodes. This makes GKE “self healing” to provide high availability and reliability with “autoscaling” up and down based on demand.

PROTIP: The virtual reality mobile game Pokemon Go, released in 2016, was the largest deployment of GKE at the time.

In this diagram:

  1. Create demo-cluster:

    gcloud container clusters create demo-cluster --num-nodes=3
    

    kubectl create deployment demo-app --image=gcr.io/demo-project-123/demo:1.0

    kubectl expose deployment demo-app --type=LoadBalancer --port 5000 --target-port 5000

  2. List all pods, including in the system namespace:

    kubectl get pods --all-namespaces
    
  3. Scale:

    kubectl scale deployment demo-app --replicas=3
    
  4. Loop responses:

    while true; do sleep 0.1; curl http://xx.xx.xx.xxx:5000/; echo -e; done
    
  5. Delete GKE cluster:

    kubectl delete service demo-app
    gcloud container clusters delete demo-cluster
    gcloud container images delete gcr.io/demo-project-123/demo:1.0
    gcloud container images delete gcr.io/demo-project-123/demo:2.0
    

Amazon AWS ECS & EKS & KOPS

k8s-aws-kubernauts

Amazon ECS (Elastic Container Service) is “supercharged” by
Amazon EKS (Elastic Kubernetes Service), which provides deeper integration into AWS infrastructure (than ECS) for better reliability (at higher cost). Amazon says it runs upstream K8s, not a fork (such as AWS Elasticsearch), so it should be portable to other clouds and on-premises.

ECS is free since Amazon charges for the underlying EC2 instances and related resources for each task ECS runs.

But each EKS cluster costs an additional $144 USD per month (20 cents per hour in the lowest cost us-east-1 region), for EKS to administer a “Control Plane” across Availability Zones.

The diagram (from cloudnaut) illustrates the differences between ECS vs. EKS clusters.

eks-ecs-load-balacing-960x720-32943.png

ECS uses an Application Load Balancer (ALB) to distribute load servicing clients. When EKS was introduced December 2017, it supported only Classic Load Balancer (CLB), with beta support for Application Load Balancer (ALB) or Network Load Balancer (NLB).

Within the cluster, distribution among pods can be random or based on the round robin algorithm.

EKS incurs additional cross-AZ network traffic charges because, to ensure high availability, EKS runs within each node a proxy to distribute traffic in and out of pods across three Kubernetes masters across three Availability Zones. So this additional processing may also require larger instance types, which EKS automatically selects.

Instance type selection is an important consideration because AWS limits the number of IP Addresses per network interface based on instance size, from 2 to a max of 15. Not all AWS EC2 instance types are equipped with the Elastic Network Interface (ENI) that ECS and EKS need to virtually redistribute load among pods. Both ECS and EKS detects and automatically replaces unhealthy masters, provide version upgrades, and automated patching for masters. A secondary private IPv4 network interface is used so that in the event of an instance failure, that interface and/or secondary private IPv4 address can be transferred to a hot standby instance by EKS.

eks-ecs-vpc-eni-960x720-31322

While ECS assigns separate ENI to each ECS task (a group of containers), EKS attaches multiple ENIs per instance, with multiple private IP addresses assigned to each ENI. Since EKS shares network interfaces among pods, a different Security Group cannot be specified to restrict a specific pod.

k8s-networking-920x840

Moreover, network interfaces, multiple private IPv4 addresses, and IPv6 addresses are only available for instances running under a isolated VPC (Virtual Private Cloud) and perhaps with AWS PrivateLink access. So EKS requires AWS VPC. For best isolation (rather than sharing), create a different VPC and Security Group for each cluster.

Both ECS and EKS are accessed from the ECS CLI console and support ECS API commands and Docker Compose, plus AWS CloudTrail logging.

Also, EKS leverages IAM authentication, but did not provide out-of-the-box support for Task IAM Roles (pods) used to grant access to AWS resources the way ECS does (AmazonEKSClusterPolicy and AmazonEKSServicePolicy).

For example, to allow containers to access S3, DynamoDB, SQS, or SES at runtime.

Behind the scenes, Amazon used HashiCorp Packer config scripts to make EKS-optimized AMIs run on Amazon Linux 2. The machines are preconfigured with Docker, kubelet, and the AWS/Heptio AMI Authenticator DaemonSet, plus an EC2 User Data bootstrap script that automatically joins an EKS cluster. AMIs that have GPU support are also generated for users who have defined an AWS Marketplace Subscription.

See the EKS Manifest diagram explained by Mark Richman (@mrichman) in his video class, with code at https://github.com/linuxacademy/eks-deep-dive-2019.

PROTIP: My sample.sh installs the utilities and brings up a EKS cluster with one command. It costs $110 per month.

EKS makes use of the AWS Fargate launch type, which provides horizontal scaling on Amazon’s own fleet of EC2 clusters. It’s informally called the “AWS Container Manager”.

Fargate supports “awsvpc” network mode natively so that tasks running on the same instance share that instance’s ENI.

“Once you do get your cluster running, there’s nothing to worry about except monitoring performance and, as demand changes, adjusting the scale of your service.” – David Clinton*

This totalcloud.io article compares ECS, EKS, and Fargate.

A concern with Fargate is its time to load.

  1. Manage EKS nodegroups:

    eksctl get nodegroup --cluster=demo-cluster-ec2
    eksctl scale nodegroup --cluster=demo-cluster-ec2 --nodes=1 --name=ng-exxx
    
  2. Delete to stop charges:

    kubectl delete service demo-app
    eksctl delete cluster --name demo-cluster-ec2
    aws ecr list-images --repository-name demo
    aws ecr batch-delete-image --repository-name demo --image-ids xxx
    aws ecr delete-repository --repository-name demo --force
    

Microsoft’s Azure Kubernetes Service (AKS)

VIDEO “K8s on MS Azure”

az-k8s-flow-2236x1258 *

AKS manages the Control Plane master node.

kubectl is included as part of the Azure Cloud Shell.

SF (Service Fabric) is the core technology.

ACR (Azure Container Registry) stores Docker images (like DockerHub).

  • Users are named like “user1.azurecr.io”
  • Change the “$mine.yaml” to specify use of ACR user instead of “Microsoft” in Dockerhub.
  • Lock down Container Registry access
  • RBAC QUESTION: Azure and Notary to reference images as certs?

ACI (Azure Container Instances) provides hypervisor isolation. See Quickstart: hands-on Deploy AKS to ACI using the ACI (Azure Container Instances) connector:

    az container create --resource-group myResourceGroup --name mycontainer --image microsoft/aci-helloworld --dns-name-label myCluster --ports 80

Deploy a model as web service on Azure Container Instances by combining ACI with ACI Logic Apps connector, Azure queues, Azure Functions, Azure Machine Learning to

  1. To run an app in AKS, post App Descriptor to the K8s API Server, and Scheduler schedules worker nodes.
    • kubectl apply -f “$mine.yaml”
    • kubectl get service “$APPNAME” --watch
    • kubectl scale --replicas=3 “deployment/$APPNAME”
    • kubectl get pods
    • Scale by CPU: az aks scale --name $appname --node-count 3 --resource-group $container_rg

  2. Add metrics to Container service:
    • Number of pods by phase
    • Number of pods in Ready state
    • Total amount of available memory in a managed cl…
    • Total number of available cpu cores in a managed …

  3. To scan containers, add from Marketplace one of these:
    • Twistlock
    • Aqua cloud native security platform
    • Sysdig

References:

Microsoft Draft

Microsoft created Draft (like Skaffold) to simplify getting started in Azure to lift-and-shift Windows ASP.NET apps. It has two commands:

    
       draft create  # helm chart and Dockerfile
       draft up      # deploy

Draft uses language packs for Ruby, C# .NET Core 2.2 with Windows packs, authenticated to Azure Container Registry (ACR) and AKS.


Other clouds


Other Orchestration systems managing Docker containers

  • OpenShift dedicated
  • OpenShift Online (cloud-based)
  • Kubernetes by Google
  • Centos
  • Atomic
  • Consul, Terraform
  • Serf
  • Cloudify
  • Helios

Competing Orchestration systems

  • Docker Swarm incorporated Rancher from Rancher Labs (#RancherK8s).

    Rancher Kubernetes Engine (RKE) simplifies cluster administration (on EC2, Azure, GCE, Digital Ocean, EKS, AKS, GKE, vSphere, or bare metal) – provisioning, authentication, RBAC, Policy, Security, monitoring, Capacity scaling, Cost control. Its catalog is based on Helm. See Creating an Amazon EC2 Cluster using Rancher.

  • Mesosphere DC/OS (Data Center Operating System) runs Apache Mesos to abstract CPU, memory, storage to provide an API to program a multi-cloud multi-tenant data center (at Twitter, Yelp, Ebay, Azure, Apple, etc.) as if it’s a single pool of resources. Kubernetes can run on top of it, but the DC/OS has premium (licensed) enterprise features. So it’s not for you if you never want to pay for anything.

    Mesos from Apache, which runs other containers in addition to Docker. K8SM is a Mesos Framework developed for Apache Mesos to use Google’s Kubernetes. Installation.

    See Container Orchestration Wars (2017) at the Velocity Conf 19 Jun 2017 by Karl Isenberg (@karlfi) of Mesosphere

  • Hashicorp Nomad is a lighter-weight orchestrator, not just for containers.

  • Red Hat (which IBM bought in 2018) offers its OpenShift to enable Docker and Kubernetes for the enterprise by adding external host names (projects) that add role-based security around namespaces. OpenStack enables running of k8s containers in other clouds or within private data centers.

    OpenShift runs under OKD (Origin Kubernetes Distribution), which includes a container runtime and Istio mesh. NOTE: containerd (a CNCF project) is replacing Docker as the container runtime; Azure uses containerd by default.

    See https://www.redhat.com/en/technologies/cloud-computing/openshift,


Each cluster has a master and several nodes.

Nodes

Each node is created with a kubelet process, container tooling (Docker), kube-proxy, supervisord.

Internally, Kubernetes itself does NOT create nodes.

Cluster admins use the kubeadm CLI to create nodes and add them to Kubernetes.

  1. To list resource usage across nodes of the cluster:

    kubectl top nodes
    NAME                            CPU(cores)   CPU%  MEMORY(bytes)  MEMORY%
    gke-standard-cluster-1-def...   29m          3%    431Mi          16%
    
  2. To list resource usage across pods of the cluster:

    kubectl top pods
    NAME                            CPU(cores)   CPU%  MEMORY(bytes)  MEMORY%
    gke-standard-cluster-1-def...   29m          3%    431Mi          16%
    

GCP GKE masters

Within a GCP, GKE provides the master node Kubernetes Control Plane components, which include node creation by deploying and registering Google Compute Engine instances as nodes.

GKE exposes IP addresses, which can be isolated from the public internet.

GCP does not charge for the master, which is an abstract part of the GKE service not exposed to GCP customers

Each Google regional cluster spans several physical Zones, each with a master and its worker nodes. The number of nodes is the same in each zone.

Multiple GCP projects can run on a single cluster.

Use the Google Console to specify the size of hardware in each node pool (a GKE feature).

Master Node (Control Plane)

  • Kubernetes Control Plane security: https://cloud.google.com/kubernetes-engine/docs/concepts/control-plane-security

All master components (API server, etcd database, Controller Manager) are collectively called the Kubernetes Control Plane and are managed by Google/AWS.

Secure communications between the master and nodes within a cluster automatically rely on the shared root of trust provided by certificates issued by a CA. Each cluster has its own root Certificate Authority (CA). An internal Google service manages root keys for the CA, so you can’t manually rotate the etcd certificates in GKE.

GKE uses a separate per cluster CA to provide certificates for the etcd databases within a cluster.

Separate CA’s are used for each separate cluster. When a new node of a Kubernetes cluster is created, the node is injected with a shared secret as part of its creation. This secret is then used by its kubelet to submit certificate signing requests to the cluster root CA. That way, it can get client certificates when the node is created, and new certificates when they need to be renewed or rotated

This secret can be accessed by pods, and by extension their containers, unless metadata concealment is enabled.

  1. Create a new IP address for the cluster master along with its existing IP address.

New credentials are issued to the control plane. Note that the API server will not be available during this period, although pods continue to run. After the masters are reconfigured, the nodes are automatically updated by GKE to use the new IP and credentials.

This causes GKE to also automatically upgrade the node version to the closest supported version. All of your API clients outside the cluster must also be updated to use the new credentials. Rotation must be completed for the cluster master to start serving with the new IP address and new credentials, and remove the old IP address and old credentials. If the rotation is not completed manually, GKE will automatically complete the rotation after seven days.

Note that you can also rotate the IP address for your cluster. This essentially goes through the same process because the certificates must be renewed when the master IP address is changed, but with different commands:

  1. Initiate credential rotation:

    gcloud container clusters update [CLUSTER-NAME] --start-credential-rotation
  2. Complete credential rotation:

    gcloud container clusters update [CLUSTER-NAME] --complete-credential-rotation
  3. Initiate IP rotation:

    gcloud container clusters update [CLUSTER-NAME] --start-ip-rotation
  4. Complete IP rotation:

    gcloud container clusters update [CLUSTER-NAME] --complete-ip-rotation

Pods can access the metadata of the nodes that they’re running on, such as the node secret that is used for node configuration. If a pod is compromised, this could potentially be used in unintended ways. To prevent such exposure, always configure the Cloud IAM service account for the node with minimal permissions.

But don’t confuse a Google service account with the Kubernetes ServiceAccount. This is the Cloud IAM service account used by the node VM itself.

Don’t use the compute.instances.get permission through a service account, compute instance admin role, or any custom roles. Omitting this permission blocks holders of the role from getting metadata on GKE nodes by making direct Compute Engine API calls to those nodes.

Disable legacy metadata APIs. The v1 APIs restrict the retrieval of metadata, but Compute Engine API endpoints using versions 0.1 and v1beta1 support querying of metadata.

From GKE version 1.12+, legacy Compute Engine metadata endpoints are disabled by default. With earlier versions, they can only be disabled by creating a new cluster or adding a new node pool to an existing cluster.

metadata concealment

To prevent a pod from accessing node metadata, there is a temporary solution that will be deprecated as better security improvements are developed in the future. It does this by restricting access to kube-env (which contains kubelet credentials) and the virtual machine’s instance identity token. See “protecting cluster metadata”.

Pod Security contexts

By default, containers inside a pod allow privilege elevation, and can access the host file system and the host network. But although convenient, that can be undesirable from a security perspective.

To restrict what containers in a pod can do, set security contexts in the pod specification so it’s applied to all of the pod’s containers.

  1. To display the current context ID within GKE:

    kubectl config current-context
    gke_[PROJECT_ID]_us-central1-a_standard-cluster-1
  2. To list all cluster contexts’ namespace and AUTHINFO:

    kubectl config get-contexts

  3. To change context:

    kubectl config use-context gke_${GOOGLE_CLOUD_PROJECT}_us-central1-a_standard-cluster-1

    Using security contexts in a pod definition, you can exercise a lot of control over the use of the host namespace, networking, file system, and volume types, whether privilege containers can run, and whether code in the container can escalate to root privileges.

    This sample privileged-pod.yaml is used to define a pod’s security policy:

    kind: Pod
    apiVersion: v1
    metadata:
      name: privileged-pod
    spec:
      containers:
      - name: privileged-pod
        image: nginx
        securityContext:
          privileged: true
    

    This sample provides specific user and group context for containers:

    ...
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 2000
    

    runAsUser ID “1000” is used for any containers in the pod. This should not be zero because, in a Linux system, zero is the privileged root user’s user ID. Taking away root privilege from the code running inside the container limits what it can do in case of compromise.

    fsGroup ID “2000” is associated with all containers in the pod.

    • Enable #Seccomp to block code running in containers from making system calls.
    • Enable AppArmor to restrict individual program actions.

Such direct configuration of security contexts in each individual pod can be a lot of work.

Pod Security policies (PSP)

A request can be passed through multiple controllers. If the request fails at any point, the entire request is rejected immediately, with the end user receiving an error.

Pod security policies apply to multiple pods without having to specify and manage those details in each pod definition. Defining pod security policies creates reusable security contexts. It’s easier to define and manage security configurations separately, and then apply them to the pods that need them.

Each pod security policy consists of an object and an admission controller.

The pod security policy object (a set of restrictions, requirements, and defaults) are defined in the same way as a security context inside a pod, and can be used to control the same security features.

The pod security policy admission controller acts on the creation and modification of pods.

During the creation or update of a pod, the Container Runtime enforces pod security policies based on the requested security context which defines whether the pod should be admitted.

For pod to be admitted to the cluster, it must fulfill all of security conditions defined in the pod security policy. These rules are only applied when a pod is being created or updated.

The pod security policy admission controller validates or modifies requests to create or update pods against security policies. A non-mutating admission controller just validates requests. A mutating admission controller can modify and validate requests.

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: demo-psp
spec:
  privileged: false   # Don't allow privileged pods
  allowPrivilegeEscalation: false
  volumes: 
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'persistentVolumeClaim'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  seLinux:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
  runAsUser:
    rule: 'MustRunAsNonRoot'
  readOnlyRootFilesystem: false
   

After defining a pod security policy, authorize it. Otherwise it’ll prevent other Pods from being created.

Securing Google Kubernetes Engine with Cloud IAM and Pod Security Policies 90m.

More fine grained/dynamic policies can be defined by third-party add-on Styra for K8s, which is a use case for Styra’s more generic OPA (Open Policy Agent) policy language which decouples a policy model from app code. Since OPA API works for many products and services it provides a unified toolset and framework for policy enforcement across the cloud native stack.

This is similar to what Terraform Enterprise provides.

In the future, policies can be generated from AI/ML model processing, perhaps dynamically.

ClusterRoles

You can authorize a policy using Kubernetes Role-Based Access Control.

Here, a clusterRole allows the pod security policy to be used: restricted-pods-role.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-clusterrole
rules:
- apiGroups:
  - extensions
  resources:
  - podsecuritypolicies
  resourceNames:
  - demo-psp
  verbs:
  - use
   

Next define a role binding to bind the previous cluster role to users or groups. In this example, two subjects for the role binding are specified.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-rolebinding
  namespace: demo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-clusterrole
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
- kind: ServiceAccount
  name: service@example.com
  namespace: demo
   

The first is a group containing all service accounts within the demo namespace. The other is a specific service account in the demo namespace. The role binding can grant permission to the creator of the pod, which might be a Deployment, ReplicaSet, or other template controller.

It can grant permission to the created pod’s service account. Note that granting the controller access to the policy would grant access for all pods created by that controller, so the preferred method for authorizing policies is to grant access to the pod’s service account.

Without a pod security policy controller, pod security policies mean nothing. You need both to define policies and to enable the pod security policy admission controller. Careful, the order here matters: if you enable the pod security policy controller before defining any policies, you’ve just commanded that nothing is allowed to be deployed.

In GKE, the pod security policy controller is disabled by default. If you choose to use pod security policies, first define them, and then enable the controller with the gcloud command shown below, where the name represents the name of your cluster.
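
Roughly (the cluster name is a placeholder; PSP support in GKE was beta and has since been deprecated upstream):

gcloud beta container clusters update [CLUSTER_NAME] --enable-pod-security-policy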

You can take additional security measures in kubernetes, and many of these are enabled by default in GKE, especially if you choose to run recent versions of kubernetes in your GKE cluster.

For example, GKE by default uses Google’s Container-Optimized OS for the node OS. Unlike a general-purpose Linux distribution, Container-Optimized OS implements a minimal read-only file system, performs system integrity checks, and implements firewalls, audit logging, and automatic updates.

You can enable node auto-upgrades to keep all of your nodes running the latest version of Kubernetes. You can choose to run private clusters, which contain nodes without external IP addresses. You can also choose to run the cluster master for a private cluster without a publicly reachable endpoint by using Master Authorized Networks. By default, private clusters do not allow external IP addresses to access the cluster master endpoint. Using private clusters with master authorized networks makes your cluster master reachable only by the specific address ranges that you choose. Nodes within your cluster’s VPC network can still access the master, and so can Google’s internal production jobs that manage it for you.

Make sure to protect your secrets by using encrypted Secrets to store sensitive configuration information, rather than storing it in ConfigMaps.

Whenever possible, grant privileges to groups rather than individual users. This applies both to Cloud IAM, which lets you grant roles to Google groups, and to Kubernetes RBAC, which lets you grant roles to Kubernetes groups. Suppose you grant privileges to an administrator named Pat in many places and then Pat leaves your company. You now must track down all the places where Pat has privileges and remove them. That’s tedious and error-prone.

If you follow the best practice of always granting privileges to groups rather than to users, you can remove Pat’s access simply by taking Pat out of the administrator group.

Qwiklab: Implementing Role-Based Access Control with Google Kubernetes Engine:


Master node

Nodes are joined to the master node using the kubeadm join program and command.

The master node runs the kube-apiserver and the components etcd, controller-manager, and scheduler.

The master node itself is created by the kubeadm init command, which establishes folders and invokes the Kubernetes API server. That command is installed along with the kubectl package (pronounced “cube cuddle”). A command with the same name is used to obtain the version.

  1. View memory and CPU usage of pods across nodes from the K8s Metrics Server:

    kubectl top node
    kubectl top pod

API Server

The kubectl client communicates using REST API calls to an API Server which handles authentication and authorization.

kubectl get apiservices

The API was initially monolithic but has since been split up into the groups below (a quick way to check follows the list):

  • core “” to handle pod & svc & ep (endpoint)
  • apps to handle deploy, sts, ds
  • authorization to handle role, rb
  • storage to handle pv (persistent volume) and pvc, sc (storage classes)
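
To see the resources in a given group and the versions the API server serves, two quick checks:

    kubectl api-resources --api-group=apps
    kubectl api-versions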

Kube-proxy

The kube-proxy maintains network connectivity among the Pods in a cluster.

kube-proxy watches the API server for addition and removal requests. For each new service, kube-proxy opens a randomly chosen port on the local node. It then makes proxied connections to one of the corresponding back-end pods.

The “proxy” in kube-proxy means that it can do simple network stream or round-robin forwarding across a set of backends.

Three modes:

  • User space mode
  • Iptables mode
  • Ipvs mode (alpha as of v1.8)

Kubelet

A Kubelet agent program is automatically installed in each node created.

Kubelet serves as Kubernetes’s agent on each node.

Kubelet only manages containers created by the API server - not any container running on the node.

Kubelet communicates with the API server to see if pods have been assigned to nodes.

Kubelets communicate with the Kubernetes API server over secured protocols (TLS and SSH), using certificates issued by the cluster's root CA.

Kubelet takes a set of PodSpecs provided by the kube-apiserver and ensures that the containers described are running and healthy.

Kubelet mounts and runs pod volumes and secrets.

Image pull secrets authenticate with private container registries.

Kubelet executes health checks to identify pod/node status.

Service accounts can also store image pull secrets.

Control Plane

Each kubelet interacts with the “Control Plane”, which allocates IP addresses and runs the nodes under its control.

Kubelet constantly compares the status of pods against what is declared in yaml files, and starts or deletes pods as necessary to meet the request.

Restarting Kubelet itself depends on the operating system (monit on Debian or systemctl on systemd-based systems).

RBAC (Role-Based Access Control)

Scheduler Pod stats

The API Server puts pods in “Pending” state while the Scheduler decides where to place them; the Scheduler binds pods to nodes only when enough resources are available.

Pod phases:

  1. Pending - accepted, but still being scheduled (e.g. image being pulled from the repo)
  2. Running after being attached to a node and containers created
  3. Succeeded - all containers terminated successfully (as specified)
  4. Failed - all containers terminated, and at least one of them failed
  5. Unknown - communication error
  6. CrashLoopBackOff - pod not configured correctly
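
    To read a pod's phase directly (substitute your own pod name):

    kubectl get pod <pod-name> -o jsonpath='{.status.phase}'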


    Rules obeyed by the Scheduler about pods are called “Tolerations”.


Taints and Tolerations

  • REF:
  • https://mckinsey.udemy.com/course/certified-kubernetes-application-developer/learn/lecture/12903100#notes

KLab: Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes.

  • Taints on nodes with keyname=value:effect in commands targeting nodes.

  • Tolerations on pods in PodSpec yaml with matching taints.

Taints repel pods from nodes

To taint nodes, KLab:

  1. Use the taint nodes subcommand to specify to the Scheduler a node to repel pods matching the key:

    kubectl taint nodes node1 keyname=value:taint-effect

    For example: kubectl taint nodes node1 dedicated=group1:NoSchedule

    taint-effect defines what happens to pods which do not tolerate the taint.

    • effect: “NoSchedule” means the system will not schedule onto the node any pods that do not tolerate the taint.

    • effect: “PreferNoSchedule” defines a “preference” or “soft” version of NoSchedule – the system will try to avoid placing a pod that does not tolerate the taint on the node, but it is not required.

    • effect: “NoExecute” causes any pods that do not tolerate the taint to be evicted immediately, and pods that do tolerate the taint will never be evicted.

  2. PROTIP: tolerationSeconds: 3600 optionally added to NoExecute effect dictates how many seconds the pod stays bound to the node after the taint is added. If this pod is running and a matching taint is added to the node, then the pod will stay bound to the node for 3600 seconds, and then be evicted. If the taint is removed before that time, the pod will not be evicted.

    NOTE: More than one taint can be applied to a node; a pod must tolerate all of a node's taints to be scheduled there.

  3. PROTIP: Remove a taint by a dash after the taint effect:

    kubectl taint nodes node1 key=value:NoSchedule-

    Tolerations allow pods onto tainted nodes

  4. Tolerate (ignore taints) in PodSpec yaml spec: to allow (but do not require) certain pods to schedule onto nodes with matching taints.

    NOTE: Tolerations are one of a few PodSpec items which can be edited while active, along with containers[].image, initContainers[].image, and Job activeDeadlineSeconds.

spec:
  ...
  tolerations:
  - key: "app"
    operator: "Equal"
    value: "blue"
    effect: "NoSchedule"
   

The equivalent imperative command format:

kubectl taint nodes node1 app=blue:NoSchedule
   
  1. Such details are revealed using the kubectl describe nodes command.

    kubectl edit pod <pod-name>

    If the attempt fails, the file is saved to /tmp/kubectl-edit-ccvrq.yaml

NodeSelectors

For pods defined with nodeSelector such as:

  nodeSelector:
    size: Large
   

Kubernetes matches such pods against nodes that carry a matching label (set with kubectl label, as shown below).
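A minimal sketch of labeling a node so the nodeSelector above can match it (the node name is an assumption):

    kubectl label nodes node1 size=Large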

nodeAffinity & podAntiAffinity

  • KK VIDEO
  • https://www.coursera.org/learn/deploying-workloads-google-kubernetes-engine-gke/lecture/aJh3H/affinity-and-anti-affinity

“Node affinity” is a property of Pods that attracts them to a set of nodes (either as a preference or a hard requirement). The Node controller uses built-in taints to specify conditions: “network-unavailable”, “unschedulable”, “cloudprovider-uninitialized”, “not-ready”, “memory-pressure”, “disk-pressure”, “out-of-disk”.

spec:
  containers:
  ...
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: In
            values:
            - Large
   

Affinity settings can be used to place in each zone one and only one web server Pod and one cache Pod:

k8s-affinity-zones-1554x778

Types:

  1. requiredDuringSchedulingIgnoredDuringExecution:
  2. preferredDuringSchedulingIgnoredDuringExecution:
  3. requiredDuringSchedulingRequiredDuringExecution:

Put another way:

Type    DuringScheduling  DuringExecution
Type 1  Required          Ignored
Type 2  Preferred         Ignored
Type 3  Required          Required

Alternatively, to bind to a single Node with nodeAffinity:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-name
   
An alternative is volumeMode: Block.

persistentVolumeReclaimPolicy (Recycling) policies are:

  • Delete
  • Retain (keep the contents)
  • Recycle (Scrub the contents).

Extract pod yaml from running podspec

kubectl get pod <pod-name> -o yaml > my-new-pod.yaml

   https://kodekloud.com/courses/kubernetes-certification-course-labs/lectures/12039454




### etcd storage 

   The API Server and Scheduler persist their configuration and status information in an etcd cluster (from CoreOS).

   Kubernetes data stored in etcd includes jobs being scheduled, created and deployed, pod/service details and state, namespaces, and replication details.

   It's called a cluster because, for resiliency, etcd replicates data across its members. That is why production control planes run multiple etcd members (an odd number, typically three or more).




### eksctl

1. See https://eksctl.io about installing the eksctl CLI tool for creating clusters on EKS. It is written in Go, supported (via Slack) by GitOps vendor Weaveworks, and uses CloudFormation. 

1. To create an EKS cluster:

   
eksctl create cluster
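
A slightly fuller sketch with common flags (the cluster name, region, and node count here are assumptions):

eksctl create cluster --name demo-cluster --region us-west-2 --nodes 3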
### Node Controllers and Ingress

The Node controller assigns a CIDR block to newly registered nodes, then continually monitors node health. When necessary, it taints unhealthy nodes and gracefully evicts unhealthy pods. The default timeout is 40 seconds.

Load balancing among nodes (hosts within a cloud) is handled by third-party port forwarding via Ingress controllers. See Ingress definitions.

An "Ingress" is a collection of rules that allow inbound connections to reach the cluster services. The Ingress Resource defines the connection rules.

In Kubernetes the Ingress Controller could be an NGINX container providing reverse proxy capabilities.

In Google Kubernetes Engine, by default LoadBalancers give access to a regional Network Load Balancing configuration. To get access to a global HTTP(S) Load Balancing configuration, use an Ingress object.

### Plug-in Network

PROTIP: Kubernetes uses third-party services to handle load balancing and port forwarding through ingress objects managed by an ingress controller.

CNI (Container Network Interface) spec

An alternative is kubenet.

Other CNI vendors include Calico, Cilium, Contiv, Weavenet. Flannel on Azure?

1. Find which CNI is installed:
ps -ef | grep cni
student   3638  9589  0 23:24 pts/0    00:00:00 grep --color=auto cni
root      9735     1  3 Oct07 ?        00:54:09 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf 
--kubeconfig=/etc/kubernetes/kubelet.conf 
--config=/var/lib/kubelet/config.yaml 
--network-plugin=cni 
--pod-infra-container-image=k8s.gcr.io/pause:3.2
   
1. View CNI installer files (to troubleshoot network configuration issues):
sudo more $(sudo find / -name *install-cni* | grep /log/containers)
sudo less /var/log/calico/cni/cni.log
sudo less /etc/cni/net.d/calico-kubeconfig

cAdvisor

To collect resource usage and performance characteristics of running containers, many install a pod containing Google’s Container Advisor (cAdvisor). It aggregates and exports telemetry to an InfluxDB database for visualization using Grafana.

Google’s Heapster can also be used to send metrics to Google’s cloud monitoring console.


Containers are declared by yaml such as this to run an Alpine Linux Docker container:

apiVersion: v1
kind: Pod
metadata:
  name: alpine
  namespace: default
spec:
  containers:
  - name: alpine
    image: alpine
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
   

Other command:

command:
    - sh
    - "-c"
    - echo Hello Kubernetes! && sleep 3000
    

Nodes Architecture diagram

Yongbok Kim (who writes in Korean) posted (on Jan 24, 2016) a master map of how all the pieces relate to each other:
Click on the diagram to pop-up a full-sized diagram: k8s_details-ruo91-350x448.jpg

BTW What are now called nodes were previously called “minions”, perhaps in deference to NodeJs, which refers to nodes differently.

Klab: Nodes are managed together within each namespace.

Testing K8s

  1. Dry-run

    kubectl create -f pod.yaml --dry-run=client
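
    Related: combine --dry-run=client with -o yaml to generate a manifest without creating anything (the image name is just an example):

    kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml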

End-to-end tests by those who develop Kubernetes are coded in Ginkgo and Gomega (because Kubernetes is written in Go).

The kubetest suite builds, stages, extracts, and brings up the cluster. After testing, it dumps logs and tears down the test rig.


Installation options

There are several ways to obtain a running instance of Kubernetes.

Rancher

Rancher is a deployment tool for Kubernetes that also provides networking and load balancing support. Rancher initially created its own framework (called Cattle) to coordinate Docker containers across multiple hosts, at a time when Docker was limited to running on a single host. Now Rancher’s networking provides a consistent solution across a variety of platforms, especially on bare metal or standard (non-cloud) virtual servers. In addition to Kubernetes, Rancher enables users to deploy a choice of Cattle, Docker Swarm, or Apache Mesos (the upstream project for DCOS, the Data Center Operating System). Rancher eventually became part of Docker Swarm.

Within KOPS

Minikube offline

B) Minikube spins up a local environment on your laptop.

NOTE: Ubuntu on LXD offers a 9-instance Kubernetes cluster on localhost.

PROTIP: CAUTION: your laptop going to sleep may corrupt a running Minikube cluster.

Server install

C) Install Kubernetes natively on CentOS.

D) Pull an image from Docker Hub within a Google Compute or AWS cloud instance.

CAUTION: If you are in a large enterprise, confer with your security team before installing. They often have a repository such as Artifactory or Nexus where installers are available after being vetted and perhaps patched for security vulnerabilities.

See https://kubernetes.io/docs/setup/pick-right-solution

### On GCP

  1. On GCP:

    gcloud container clusters get-credentials guestbook2

kubectl get pods --all-namespaces


OS for K8s

As a brainchild of the Linux Foundation, one would expect Kubernetes to run on different flavors of Linux.

CentOS

First, install kubeadm. Then:

cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
   

Also:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
   

Ubuntu

  1. On Ubuntu, install:

    apt install -y docker.io
  2. To make sure Docker and Kublet are using the same systemd driver:

    cat <<EOF >/etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
  3. Install the keys:

    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
  4. sources:

    cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
    deb http://apt.kubernetes.io/ kubernetes-xenial main
    EOF
  5. To download new sources:

    apt update
  6. To download the programs:

    apt install -y kubelet kubeadm kubectl

Architectural Details

This section further explains the architecture diagram above.

This sequence of commands:

  1. Select “CloudNativeKubernetes” sandboxes.
  2. Select the first instance as the “Kube Master”.
  3. Login that server (user/123456).
  4. Change the password as prompted on the Ubuntu 16.04.3 server.

    Deploy Kubernetes master node

  5. Use this command to deploy the master node, which controls the other nodes. It is deployed first, and it invokes the API Server:

    sudo kubeadm init --pod-network-cidr=10.244.0.0/16

    kubernetes-nodes-363x120-20150

    The address is the default for Flannel.

    Flow diagram

    k8s-services-flow-847x644-100409

    The diagram above is by Walter Liu

    Flannel for Minikube

    When using Minikube locally, a CNI (Container Network Interface) is needed. So set up Flannel from CoreOS using the open source Tectonic Installer (@TectonicStack). It configures an IPv4 “layer 3” network fabric designed for Kubernetes.

    The response suggests several commands:

  6. Create your .kube folder:

    mkdir -p $HOME/.kube
  7. Copy in a configuration file:

    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  8. Give ownership of “501:20”:

    sudo chown $(id -u):$(id -g) $HOME/.kube/config
  9. Make use of CNI:

    sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml

    The response:

    clusterrole "flannel" created
    clusterrolebinding "flannel" created
    serviceaccount "flannel" created
    configmap "kube-flannel-cfg" created
    daemonset "kube-flannel-ds" created
    

    ConfigMaps in cfg files are used to define environment variables.

  10. List pods created:

    kubectl get pods --all-namespaces -o wide

    Specifying wide output adds the IP address column

    Included are pods named:

    • api server (aka “master”) accepts kubectl commands
    • etcd (cluster store) for HA (High Availability) in the control plane
    • controller to watch for changes and maintain desired state
    • dns (domain name server)
    • proxy load balances across all pods in a service
    • scheduler watches the api server for new pods and assigns them to nodes

    System administrators control the Master node UI in the cloud, or write scripts that invoke the kubectl command-line client program, which controls the Kubernetes Master node.

    Kubernetes in 5 mins Desired State Management

    Proxy networking

    The Kube Proxy communicates only with Pod admin, whereas Kubelets communicate with individual pods as well.

    Each node has a Flannel and a proxy.

    The Server obtains from Controller Manager ???

  11. Switch to the webpage of servers to Login to the next server.
  12. Be root with sudo -i and provide the password.
  13. Join the node to the master by pasting in the command captured earlier, as root:

    kubeadm join --token ... 172.31.21.55:6443 --discovery-token-ca-cert-hash sha256:...

    Note the above is one long command. So you may need to use a text editor.

    Deployments manage Pods.

    kubernetes-ports-381x155-19677

  14. Switch to the webpage of servers to Login to the 3rd server.
  15. Again Join the node to the master by pasting in the command captured earlier:
  16. Get the list of nodes instantiated:

    kubectl get nodes
  17. To get list of events sorted by timestamp:

    kubectl get events --sort-by='.metadata.creationTimestamp'
  18. Create the initial log file so that Docker mounts a file instead of a directory:

    touch /var/log/kube-apiserver.log
    
  19. Create in each node a folder:

    mkdir /srv/kubernetes
    
  20. Missing: Get a utility to generate TLS certificates:

    brew install easyrsa
    
  21. Run it:

    ./easyrsa init-pki
    

    Master IP address

  22. Run it:

    MASTER_IP=172.31.38.152
    echo $MASTER_IP
    
  23. Run it:

    ./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass
    

    Watchers

    To register watchers on specific nodes.??? Kubernetes supports TLS certificates for encryption over the line.

    REST API CRUD operations are used

    The K8s Admission Controller enables less coding in yaml files by adding what is necessary.

    kubectl details? 
  24. Put in that folder (in each node):

    • basic_auth.csv user and password
    • ca.crt - the certificate authority certificate from pki folder
    • known_tokens.csv which kubelets use to talk to the apiserver
    • kubecfg.crt - client cert public key
    • kubecfg.key - client cert private key
    • server.cert - server cert public key from issued folder
    • server.key - server cert private key

  25. Copy from API server to each master node:

    
    cp kube-apiserver.yaml  /etc/kubernetes/manifests/
    

    The kubelet compares the file's contents against actual state to "make it so", using the manifests folder to create kube-apiserver instances.

  26. For details about each pod:

    
    kubectl describe pods
    

    Expose

    Deploy service

  27. To deploy a service:

    kubectl expose deployment *deployment-name* [options]

Container Storage Interface (CSI)

Configmap

VIDEO: Nana

Use ConfigMaps as environment variables or via a volume mount, in a specific namespace.

env:
  - name: SPECIAL_LEVEL_KEY
    valueFrom:
      configMapKeyRef:
        name: special-config
        key: special.how

In the pod manifest above, the valueFrom key with a configMapKeyRef reads individual values. Alternatively, mount the ConfigMap as a volume:

volumes:
  - name: config-volume
    configMap:
      name: special-config
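
A minimal sketch of creating that ConfigMap imperatively (the key and value simply match the snippets above):

kubectl create configmap special-config --from-literal=special.how=very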


VIDEO: from “Nana’s TechWorld”

Volumes of data storage

k8s-sc-pvc-pv-453x248 (credit)

Docker Containers share attached data volumes available within each Pod:

REMEMBER: Local Volumes defined in pods disappear when each pod dies.

Sample pod yaml defining the volumes mounted within its containers:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
      - mountPath: "/var/www/html"
        name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: pvc-name
   

Persistent Volume (PV)

PVs are a cluster-wide resource, not tied to a specific namespace.

Admins create a Persistent Volume (PV) to provision blocks of storage (of specific Gigabit capacity sizes) for use within a specific cluster.

PV’s are like an external plugin to a cluster.

A complete list in kubernetes.io.

For an elastic app, define several volume types in a container, referencing a PVC name (which might be backed by awsElasticBlockStore), a Secret, and a ConfigMap:

spec:
  containers:
  - image: elastic:latest
    name: elastic-container
    ports:
    - containerPort: 9200
    volumeMounts:
    - name: es-persistent-storage
      mountPath: /var/lib/data
    - name: es-secret-dir
      mountPath: /var/lib/secret
    - name: es-config-dir
      mountPath: /var/lib/config
  volumes:
  - name: es-persistent-storage
    persistentVolumeClaim:
      claimName: es-pv-claim
  - name: es-secret-dir
    secret:
      secretName: es-secret
  - name: es-config-dir
    configMap:
      name: es-config-map
   

#### For an NFS (Network File System):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-name
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.0
  nfs:
    path: /dir/path/on/nfs/server
    server: nfs-server-ip-address
   

On a Google Cloud ext4 type volume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: google-cloud-volume
  labels:
    failure-domain.beta.kubernetes.io/zone: us-central1-a__us-central1-b
spec:
  capacity:
    storage: 400Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: my-data-disk
    fsType: ext4
   

Cloud Volumes (Geo-replicated)

  • AWS Elastic Block Store (EBS)
  • GCP GCE Persistent Disk
  • Azure Disk and Azure File

    apiVersion: v1
    kind: Pod
    metadata:
      name: azure-pod-azure
    spec:
      volumes:
      - name: data
        azureFile:   # Azure File storage
          secretName: azure-secret
          shareName: share-name
          readOnly: false
      containers:
      - image: someimage
        name: my-app
        volumeMounts:
        - name: data
          mountPath: /data/storage
     

Alternately on Google:

    gcePersistentDisk:
      pdName: datastorage
      fsType: ext4
   

Alternately:

    awsElasticBlockStore:   # AWS EBS
      volumeID: volume_ID
      fsType: ext4
   

Storage Classes

A storage class (sc) is a type of template used to dynamically provision data storage.

Create persistent volumes dynamically:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-class-name
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1
  iopsPerGB: "10"
  fsType: ext4
   

REMEMBER: name: storage-class-name must match PVC config storageClassName: storage-class-name

Persistant Volume Claim (PVC)

A Persistent Volume Claim (PVC) is a request for that storage by a user.

Once granted, a PVC is used as a “claim check” for the storage.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: storage-class-name
   

REMEMBER: The metadata: name: in the PVC definition needs to match the Pod’s claimName: pvc-name.

Kubernetes tries to find a PV that matches the capacity: 10Gi with a compatible persistent volume in the cluster.

REMEMBER: name: storage-class-name in pod definition must match PVC config storageClassName: storage-class-name

More:

  • https://redhat-scholars.github.io/kubernetes-tutorial/kubernetes-tutorial/volumes-persistentvolumes.html
  • https://github.com/burrsutter/9stepsawesome/blob/master/9_databases.adoc

Deploy StatefulSet components

VIDEO

Stateless apps don’t keep a record of state (such as shopping cart items). Each request is completely new, without regard for what activity occurred before. So they can be defined using Deployment components: standard Pods are identical and interchangeable, with the same service name, created in random order with random hashes. Data passes through NodeJs.

Each stateful app (such as mysql-app) that stores data about the state of each transaction (updating a database such as MongoDB) is defined using Kubernetes StatefulSet (STS) components:

  • Previous State Data (in data replicas) is queried and updated depending on the data state
  • STS Pods are NOT identical. Each pod has a sticky identity of the form <pod name>.{governing service domain}
  • STS Pods have individual service names, not interchangeable
  • STS Pods are created in sequence, after the success of each previous Pod, based on a persistent individual identity

All pods can read, but only Master pods can write.

To ensure each Pod maintains the latest state in local storage, continuous data sync occurs from master to slaves.
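
A minimal StatefulSet sketch under assumed names (the mysql-app service and labels, a demo-only image and storage size), paired with the headless governing service it requires:

apiVersion: v1
kind: Service
metadata:
  name: mysql-app
spec:
  clusterIP: None            # headless service provides the governing service domain
  selector:
    app: mysql-app
  ports:
  - port: 3306
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-app
spec:
  serviceName: mysql-app     # ties each pod's sticky identity to the service domain
  replicas: 3
  selector:
    matchLabels:
      app: mysql-app
  template:
    metadata:
      labels:
        app: mysql-app
    spec:
      containers:
      - name: mysql
        image: mysql:5.7     # assumed image; demo only
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD   # demo only; use a Secret in real clusters
          value: "1"
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:      # one PVC per pod, retained across pod restarts
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi

Pods come up in order as mysql-app-0, mysql-app-1, mysql-app-2, each keeping its own PVC.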

DaemonSets

daemonsets (ds)

Usually for system services or other pods that need to physically reside on every node in the cluster, such as for network services. They can also be deployed only to certain nodes using labels and node selectors.
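
A minimal DaemonSet sketch (the log-collector name, fluentd image, and nodeSelector are assumptions for illustration):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      nodeSelector:                  # optional: limit to certain nodes via labels
        kubernetes.io/os: linux
      containers:
      - name: fluentd
        image: fluent/fluentd:edge   # illustrative image
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log             # read host logs on every node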

  1. To drain a node out of service temporarily for maintenance:

    kubectl drain node3.mylabserver.com --ignore-daemonsets
  2. To return to service:

    kubectl uncordon node3.mylabserver.com

Sample micro-service apps

Bob Reselman’s 3-day hands-on classes on Kubernetes make use of bash scripts and a sample app at https://github.com/reselbob/CoolWithKube

The repo is based on work from others, especially Kelsey Hightower, the Google Developer Advocate.

  • https://github.com/kelseyhightower/app - an example 12-Factor application.
  • https://hub.docker.com/r/kelseyhightower/monolith - Monolith includes auth and hello services.
  • https://hub.docker.com/r/kelseyhightower/auth - Auth microservice. Generates JWT tokens for authenticated users.
  • https://hub.docker.com/r/kelseyhightower/hello - Hello microservice. Greets authenticated users.
  • https://hub.docker.com/r/nginx - Frontend to the auth and hello services.

These sample apps are manipulated by https://github.com/kelseyhightower/craft-kubernetes-workshop

  1. Install
  2. Create a Node.js server
  3. Create a Docker container image
  4. Create a container cluster
  5. Create a Kubernetes pod
  6. Scale up your services

  7. Provision a complete Kubernetes cluster using Kubernetes Engine.
  8. Deploy and manage Docker containers using kubectl.
  9. Break an application into microservices using Kubernetes’ Deployments and Services.

This “Kubernetes” folder contains scripts to implement what was described in the “Orchestrating the Cloud with Kubernetes” hands-on lab which is part of the “Kubernetes in the Google Cloud” quest.

Infrastructure as code

  1. Use an internet browser to view

    https://github.com/wilsonmar/DevSecOps/blob/master/Kubernetes/k8s-gcp-hello.sh

    The script downloads a repository forked from googlecodelabs: https://github.com/wilsonmar/orchestrate-with-kubernetes/tree/master/kubernetes

    Declarative

    This repository contains several kinds of .yaml files, which can also have the extension .yml. Kubernetes also recognizes .json files, but YAML files are easier to work with.

    The files are called “Manifests” because they declare the desired state.

  2. Open an internet browser tab to view it.

    reverse proxy to front-end

    The web service consists of a front-end and a proxy served by the NGINX web server configured using two files in the nginx folder:

    • frontend.conf
    • proxy.conf

    These are explained in detail at https://www.digitalocean.com/community/tutorials/how-to-configure-nginx-as-a-web-server-and-reverse-proxy-for-apache-on-one-ubuntu-14-04-droplet

    SSL keys

    SSL keys referenced are installed from the tls folder:

    • ca-key.pem - Certificate Authority’s private key
    • ca.pem - Certificate Authority’s public key
    • cert.pem - public key
    • key.pem - private key

pod.yml manifests

An example (cadvisor):

apiVersion: v1
kind: Pod
metadata:
  name:   cadvisor
spec:
  containers:
    - name: cadvisor
      image: google/cadvisor:v0.22.0
      volumeMounts:
        - name: rootfs
          mountPath: /rootfs
          readOnly: true
        - name: var-run
          mountPath: /var/run
          readOnly: false
        - name: sys
          mountPath: /sys
          readOnly: true
        - name: docker
          mountPath: /var/lib/docker
          readOnly: true
      ports:
        - name: http
          containerPort: 8080
          protocol: TCP
      args:
        - --profiling
        - --housekeeping_interval=1s
  volumes:
    - name: rootfs
      hostPath:
        path: /
    - name: var-run
      hostPath:
        path: /var/run
    - name: sys
      hostPath:
        path: /sys
    - name: docker
      hostPath:
        path: /var/lib/docker
   

Labels and Selectors

app labels are specified in pods for services to reference them:

k8s-label-service-link

Sample labels and values:

  • app: myapp
  • release: stable, canary
  • environment: dev, qa, production
  • tier: frontend or backend or cache
  • team: ecommerce, auth, purchasing, marketing
  • author: name
  • maintainer: joe
  • tech-lead: name
  • application-type: ui
  • release-version: 1.0

  1. Create label automatically

    kubectl expose ...
  2. Overwrite (Add) a label after a pod created:

    k label po/helloworld app=helloworldapp --overwrite
  3. List labels for a pod created:

    k get pods --show-labels
    ... app=helloworldapp

    kubectl describe

  4. View labels using grep flags:

    k describe po messaging | grep -C 5 -i labels

    BLAH: grep commands are simple and display extra text.

    JSONPath

    VIDEO: To precisely define extracts for processing by another command, use JSONPath:

  5. Get the IP of the pods with label app=nginx, using JSONPath:

    kubectl get pods -l app=nginx -o jsonpath='{range .items[*]}{.status.podIP}{"\n"}{end}'

    instead of

    kubectl get pods -o wide --no-headers | awk '{print $1,$6}'

    Note that JSONPath references object names which makes the request more understandable than awk referencing relative positions in output, which can change over time.

    More examples of JSONPath: https://github.com/himadriganguly/k8s-jsonpath/tree/main/pods

  6. List containers running within a pod:

    kubectl get pods [pod-name-here] -n [namespace] -o jsonpath='{.spec.containers[*].name}'
  7. Custom columns

    https://kubernetes.io/docs/reference/kubectl/cheatsheet/#formatting-output

    Selectors

  8. show pods labeled with values matching in list of values:

    k get pods -l 'release-version in (1.0, 2.0)'

    Label Selectors above select a set of objects using a single statement.

    "=", "!=", IN, NOTIN, EXISTS are valid selectors.

  9. Delete pods

    k delete pods -l application-level=1.0

Replication rc.yml

A ReplicaSet is configured by a Deployment controller to create and maintain a specific version of the Pods that the Deployment specifies.

The rc.yml (Replication Controller) defines the number of replicas and the pod template:

apiVersion: v1
kind: ReplicationController
metadata:
  name: cadvisor
spec:
  replicas: 5
  selector:
    app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello
        image: account/image:latest
        ports:
        - containerPort: 8080
   
  1. Apply replication:

    
    kubectl apply -f rc.yml
    

    The response expected:

    replicationcontroller "hello" configured
    
  2. List, in wide format, the number of replicated nodes:

    
    kubectl get rc -o wide
    
    DESIRED, CURRENT, READY
    
  3. Get more detail:

    
    kubectl describe rc
    

Service svc.yml

The svc.yml defines the services:

apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    app: hello-world
spec:
  type: NodePort
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: hello-world
   

PROTIP: The selector should match the labels in the pod yaml.

  1. To create services:

    
    kubectl create -f svc.yml
    

    The response expected:

    service "hello-svc" created
    
  2. List:

    
    kubectl get svc
    
  3. List details:

    
    kubectl describe svc hello-svc
    
  4. List end points addresses:

    
    kubectl describe ep hello-svc
    

Deploy yml Deployment

The deploy.yml defines the deploy:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
          protocol: TCP
      nodeSelector:
        net: gigabit
   
...
spec:
  resources:
    requests:
      memory: "300Mi"
      cpu: "250m"  # 1/4 core
    limits:
      memory: "400Mi"
      cpu: "1000m"  # 1 core
   

Deployment wraps around replica sets, a newer way of doing rolling updates than the Replication Controller. You can roll back to an old replica set by just changing the deploy.yml file.

PROTIP: Don’t run apt-upgrade within containers, which breaks the image-container relationship controls.

  1. Retrieve the yaml for a deployment:

    kubectl get deployment nginx-deployment -o yaml

    Notice the “RollingUpdateStrategy: 25% max unavailable, 25% max surge”.

    In the yaml, RollingUpdate is part of strategy:

    strategy:
      rollingUpdate:
        maxSurge: 25%
        maxUnavailable: 25%
      type: RollingUpdate
    
  2. Begin rollout of a new desired version from the command line:

    kubectl set image deployment/nginx-deployment nginx=nginx:1.8

    Alternately, edit the yaml file to nginx:1.9.1 and:

    kubectl apply -f nginx-deployment.yaml
  3. View Rollout a new desired version:

    kubectl rollout status deployment/nginx-deployment
  4. Pause Rollout to control what is included in update:

    kubectl rollout pause deployment/nginx-deployment
  5. Describe the yaml for a deployment:

    kubectl describe deployment nginx-deployment
  6. List the DESIRED, CURRENT, UP-TO-DATE, AVAILABLE:

    kubectl get deployments 

    Record Rollback history

    --record=true # to save rollback history obtained by:

    k rollout history deployment/some-deployment
  8. List the history:

    kubectl rollout history deployment/nginx-deployment --revision=3
  9. rollout (rollback) Backout the revision to a specific revision:

    kubectl rollout undo deployment/nginx-deployment --to-revision=2

    PROTIP: Notice the difference between --to-revision= and --revision=

  10. Undo rollout (rollback):

    k rollout undo deployment/my-deployment --revision=v1.2

    The default spec.revisionHistoryLimit is 10 versions retained.

Security Context

The security.yaml defines a security context pod:

apiVersion: v1
kind: Pod
metadata:
  name: security-context-pod
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  volumes:
  - name: sam-vol
    emptyDir: {}
  containers:
  - name: sample-container
    image: gcr.io/google-samples/node-hello:1.0
    volumeMounts:
    - name: sam-vol
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: false
   
  1. Create the pod:

    kubectl create -f security.yaml

    This can take several minutes.

  2. Bring up a shell the security context pod:

    kubectl exec -it security-context-pod -- sh
  3. Bring up shell and execute shell command (such as ls, ps aux to see processes):

    kubectl exec -it <pod-name> -c <container-id> -- <command>

    -c if there are several containers in a pod.

  4. See that the group is “2000” as specified:

    cd /data && ls -al
  5. Exit the security context:

    exit
  6. Delete the security context:

    kubectl delete -f security.yaml

Kubelet Daemonset.yaml

Kubelets instantiate pods: each is a set of containers running under a single IP address, the fundamental unit of scheduling on nodes.

A Kubelet agent program is installed on each server to watch the apiserver and register each node with the cluster.

PROTIP: Use a DaemonSet when running clustered Kubernetes with static pods to run a pod on every node. Static pods are managed directly by the kubelet daemon on a specific node, without the API server observing it.

  • https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/

A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected.

Deleting a DaemonSet will clean up the Pods it created. Some typical uses of a DaemonSet are:

  • running a cluster storage daemon, such as glusterd, ceph, on each node.
  • running a logs collection daemon on every node, such as fluentd or logstash.
  • running a node monitoring daemon on every node, such as Prometheus Node Exporter, collectd, Datadog agent, New Relic agent, or Ganglia gmond.
  1. Start kubelet daemon:

    
    kubelet --pod-manifest-path=<the-directory>
    

    This periodically scans the directory and creates/deletes static pods as yaml/json files appear/disappear there.

    Note: Kubelet ignores files starting with a dot when scanning the specified directory.

    PROTIP: By default, the Kubelet exposes endpoints on port 10255.

    Containers can be Docker or rkt (pluggable)

    The /spec and /healthz endpoints report status.

The container engine pulls images and stops/starts containers.

  • https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/

CNI Plugins

The Container Network Interface (CNI) is installed using basic cbr0 with the bridge and host-local CNI plugins.

The CNI plugin is selected by passing Kubelet the command-line option:

   --network-plugin=cni 
   

See https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/

Plugin                    vxlan  L2  L3  Policy  Encrypt
Project Calico            Y      -   Y   -       Y
Calico with Canal         Y      Y   -   Y       Y
Flannel                   Y      Y   -   -       -
Weave Works (Weave Net)   Y      Y   -   Y       Y
Romana                    -      -   Y   Y       -
Kube Router               -      -   Y   Y       -
Kopeio                    Y      Y   -   -       Y

Others:

  • Cisco ACI
  • Cilium
  • Contiv
  • Contrail
  • NSX-T
  • OpenVswitch

Make your own K8s

Kelsey Hightower, in https://github.com/kelseyhightower/kubernetes-the-hard-way, shows the steps to create a Kubernetes cluster on Compute Engine yourself:

  • Cloud infrastructure firewall and load balancer provisioning
  • setup a CA and TLS cert gen.
  • setup TLS client bootstrap and RBAC authentication
  • bootstrap a HA etcd cluster
  • bootstrap a HA Kubernetes Control Plane
  • Bootstrap Kubernetes Workers
  • Configure the K8s client for remote access
  • Manage container network routes
  • Deploy the cluster DNS add-on

Kubeflow

https://github.com/kubeflow/kubeflow makes deployment of Kubernetes for Machine Learning (TensorFlow) easier, using Kafka

AWS K8s Cluster Auto-scaler

https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md provides deep-dive notes and code.

References

by Adron Hall:

Julia Evans

  • https://jvns.ca/categories/kubernetes/

Drone.io

http://www.nkode.io/2016/10/18/valuable-container-platform-links-kubernetes.html

https://medium.com/@ApsOps/an-illustrated-guide-to-kubernetes-networking-part-1-d1ede3322727

https://cloud.google.com/solutions/heterogeneous-deployment-patterns-with-kubernetes

https://cloud.google.com/solutions/devops/

https://docs.gitlab.com/ee/install/kubernetes/gitlab_omnibus.html

https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html

https://devops.college/the-journey-from-monolith-to-docker-to-kubernetes-part-1-f5dbd730f620

https://github.com/ramitsurana/awesome-kubernetes

Jobs for you

Kubernetes Dominates in IT Job Searches

Learning, Video and Live

Kubernetes for Beginners by Siraj Jan 8, 2019 [11:04]

Kubernetes Deconstructed Dec 15, 2017 [33:14] by Carson Anderson of DOMO (@carsonoid)

Solutions Engineering Hangout: Terraform for Instant K8s Clusters on AWS EKS by HashiCorp

Introduction to Microservices, Docker, and Kubernetes by James Quigley

Kubernetes in Docker for Mac April 17, 2018 by Guillaume Rose, Guillaume Tardif

YOUTUBE: What is Kubernetes? Jun 18, 2018 by Jason Rahm

Kubernetes for Machine Learning

This article talks about Jupyter notebooks correctness and functionality being dependent on their environment, called “training serving skew”. To get around that, use the Binder service which takes Jupyter notebooks within a Git repository to build a container image, then launches the image in a Kubernetes cluster with an exposed route accessible from the public internet.

OpenShift’s Source-to-image (S2I) and Graham Dumpleton’s OpenShift S2I builder builds artifacts from source and injects them into docker images.

It’s used by Seldon-Core to scale Machine Learning environments. There are Seldon-Core Examples

Seldon-Core is used by Kubeflow, which makes deployments of machine learning (ML) workflows on Kubernetes simple, portable, and scalable. It provides templates and custom resources to deploy TensorFlow and other machine learning libraries and tools on Kubernetes. Included in Kubeflow is JupyterHub to create and manage multi-user interactive Jupyter notebooks. It began as TensorFlow Extended at Google.

https://github.com/kubernetes-incubator is a collection of repositories such as the spartakus Anonymous Usage Collector, metrics-server, external-dns which configures external DNS servers (AWS Route53, Google CloudDNS and others) for Kubernetes Ingresses and Services, and kube-aws which is a command-line tool to declaratively manage Kubernetes clusters on AWS.

https://radanalytics.io Oshinko empowers intelligent app development on the OpenShift platform, deploying and managing Apache Spark clusters. It has a Spark cluster management app (oshinko-webui).

Resources

8 Lightboard VIDEOS: Understanding Kubernetes series by VMware.

https://github.com/hjacobs/kubernetes-failure-stories

Kubstack

Daniel Pacak’s experience with CKAD (from Aqua Security)

@pst418

GCP PODCAST: Kubernetes and Google Container Engine hosts Francesc Campoy Flores and Mark Mandel interview Brian Dorsey, Developer Advocate, Google Cloud Platform. Comments at r/gcppodcast

O’Reilly book Kubernetes adventures on Azure, Part 1 (Linux cluster) Having read several books on Kubernetes, Ivan Fioravanti, writing for Hackernoon, says it’s time to start adventuring in the magical world of Kubernetes for real! And he does so using Microsoft Azure. Enjoy the step-by-step account of his escapade (part 1).

Microsoft’s “PDF: 50 days from zero to hero with Kubernetes” includes:

  1. Phippy Goes to the Zoo uses the children’s book character Phippy (from Docker) to introduce pods, replica sets, deployments, and ingress.

  2. The 6-part YouTube videos by Brendan Burns drawing behind glass.

  3. Kubernetes core concepts for Azure Kubernetes Service (AKS) explore basic concepts like YAML definitions, networking, secrets, and application deployments from source code.

  4. Katacoda provides a Bash terminal as if you are running Minikube and kubectl locally just by clicking the code on the left pane rather than typing.

  5. Microservices architecture on Azure Kubernetes Service (AKS) describes a reference implementation at https://github.com/mspnp/microservices-reference-implementation

  6. https://aksworkshop.io/ is a hands-on workshop to create a Kubernetes cluster, deploy a microservices-based application, and set up a CI/CD pipeline.

    • Kubernetes deployments, services and ingress
    • Deploying MongoDB using Helm
    • Azure Monitor for Containers, Horizontal Pod Autoscaler and the Cluster Autoscaler
    • Building CI/CD pipelines using Azure DevOps and Azure Container Registry
    • Scaling using Virtual Nodes, setting up SSL/TLS for your deployments, using Azure Key Vault for secrets

  7. https://azure.microsoft.com/en-us/topic/what-is-kubernetes

  8. https://aka.ms/k8slearning

  9. A visual guide on troubleshooting Kubernetes deployments DECEMBER 2019

  10. https://coreos.com/blog/kubectl-tips-and-tricks

    VIDEO from Jun 22, 2017 Covers bash completion

A cgroup (control group) is a group of Linux processes with optional resource isolation, accounting, and limits.

Secrets

  • kubernetesbyexample.com: Secrets
  • In https://kubernetes.io/docs/concepts/secret/#best-practices
  • https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data Enable encryption at rest for cluster data

Banzai cloud vault uses a mutating admission webhook to inject an executable into containers inside Pods, which then request secrets from Hashicorp Vault through special environment variable definitions. This project was inspired by a number of other projects (e.g. channable/vaultenv, hashicorp/envconsul), but one thing that makes it unique is that it is a daemonless solution.

Base64 Encoding

What Kubernetes calls its secrets are actually Base64 encoded text.

PROTIP: Custom controllers can turn external secret stores into Kubernetes Secrets ("sealed secrets"):

  • Bitnami’s Sealed Secrets controller holds a key used to asymmetrically encrypt and decrypt external secrets stored in Git.
  • AWS Secrets Manager (ASM)

  1. Encode (not encrypt) plain text to Base64 encoding using a program within coreutils that comes with macOS/Linux operating systems:

    echo -n 'supersecret' | base64 > encoded_file ; cat encoded_file

    (echo -n removes invisible new line characters for conversion)

    c3VwZXJzZWNyZXQ=

  2. Decode a base64 encoded file to text:

    base64 --decode encoded_file
  3. Create a secret from a text literal and store in K8s:

    k create secret generic my-secret-literal \
     --from-literal=my-password='password'
    
    k create secret generic my-db-password \
     --from-literal=db-password='password' \
     --from-literal=db-root-password='password'
    
  4. Create a secret keypair and store in K8s:

    k create secret generic my-secret-file \
     --from-file=ssh-privatekey=~/.ssh/id_rsa \
     --from-file=ssh-publickey=~/.ssh/id_rsa.pub
    
  5. Create a secret from a keypair:

    k create secret tls tls-secret  \
     --cert=path/to/tls.cert \
     --key=path/to/tls.key
    

Secrets - custom controllers

REMEMBER: Pods consume static ConfigMaps and Secrets.

PROTIP: To have changes to ConfigMaps and Secrets trigger rolling updates (by applying an updated hash in the PodSpec), install the custom controller “Wave” at https://github.com/pusher/wave.

  1. Use encoded secret (saved insecurely encoded in Base64):

    apiVersion: v1
    kind: Secret
    metadata:
      name: database-secrets
    type: Opaque
    data:
      DB_PASSWORD: "c3VwZXJzZWNyZXQ="   # Base64 encoded (not encrypted)
    
    Reference it in the PodSpec as a volume (or as an environment variable, shown after the links below):
    
    volumes:
    - name: database-secrets
      secret:
        secretName: database-secrets
    
    • Encrypting data on rest: https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/
    • Using Sealed Secrets that allow us to encrypt everything in Git: https://github.com/bitnami-labs/sealed-secrets
    • Using Vault to store them: https://github.com/coreos/vault-operator
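
A hedged sketch of consuming that Secret as a container environment variable (the key and Secret names follow the snippet above):

    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: database-secrets
          key: DB_PASSWORD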

PROTIP: K8s stores secrets in memory (tmpfs on a Node, not on disk) and in etcd (which should be limited to admin users).

TOOL: “Studio 3T” to connect to MongoDB.

Debugging

K8s does not come with debuggers. Output to logs, then use tracing. Printlines.

DatadogHQ.com for metrics & traces

unu uses Jaeger for auto-instrumentation

Mindspace.net provides an IDE that connects to nodes for remote debugging.

cluster-api.sigs.k8s.io printlines

KubeMonkey is a Chaos Monkey forcing random failures within Kubernetes – to test the fault tolerance of our deployments.


K8s on Raspberry Pi

Scott Hanselman built Kubernetes on 6 Raspberry Pi nodes, each with a 32GB SD card and a 1GB RAM ARM chip (like on smartphones).

Hanselman talked with Alex Ellis (@alexellisuk), who keeps his instructions and shell files updated for running OpenFaaS on the Pis.

CNCF Ambassador Chris Short developed the rak8s (pronounced rackets) library to make use of Ansible on Raspberry Pi.

Others:

  • https://blog.hypriot.com/getting-started-with-docker-on-your-arm-device/
  • https://blog.sicara.com/build-own-cloud-kubernetes-raspberry-pi-9e5a98741b49


Blogs

https://kubernetesbyexample.com provides in-depth yet concise coverage, with sample code:

IBM’s Kubernetes 101 is an excellent overview.

From zero to CKAD in 30 days August 9, 2020 by Pranam Mohanty

https://lnkd.in/f3BciG5

Sandeep Dinesh (@sandeepdinesh) from 2018

  • https://medium.com/google-cloud/kubernetes-best-practices-season-one-11119aee1d10
  • https://www.youtube.com/playlist?list=PLIivdWyY5sqL3xfXz5xJvwzFW_tlQB_GB

Observability

Burr Sutter (burrsutter.com) As a Red Hat employee:

Alex Soto (lordofthejars.com)

  • https://github.com/redhat-scholars/kubernetes-tutorial

https://itnext.io/bootstrapping-kubernetes-clusters-on-aws-with-terraform-b7c0371aaea0 using kubeadm on AWS

Production

Google on Coursera has a video course Architecting with Google Kubernetes Engine: Production by Maya Kaczorowski (Product Manager, Container Security).

Every operation on a GCP resource is performed using an API call, for which access is controlled using a permission.

OpenID Connect to the API server operates on top of OAuth and is safer than x509 certs. Windows AD servers can sync one-way via Google Cloud Directory Sync (GCDS). Group GCP permissions into roles based on common user flows. NOTE: Permissions can’t be individually assigned to members; they are granted through roles.

Get a G-Suite Domain or Cloud Identity domain (free).

Cloud IAM policy grants roles to users. Cloud IAM defines a list of bindings designating which members can view or change GKE cluster configurations.

An IAM policy can be attached to a specific resource, a project, a project folder, or a whole organization.

Inside the cluster, K8s RBAC, Pod Security.

Access control can be setup at any level within the GCP organizational hierarchy and choose the most appropriate level for each IAM policy. Within an organization, you can have multiple folders containing multiple projects and so on.

Cloud IAM policies applied at higher levels of a GCP organizational hierarchy are inherited by resources lower down that hierarchy. An IAM policy attached at the organizational level will automatically have access to all folders, all projects, and ultimately all relevant resources. There’s no way to grant a permission at higher level in the hierarchy and then take it away below.

So, in general, the policies applied at higher levels should grant very few permissions and policies applied at lower levels should grant additional permissions to only those who need them.

There are three kinds of roles in Cloud IAM: primitive, predefined, and custom. Primitive roles grant users global access to all GCP resources within a project (App Engine, Compute Engine, and Cloud Storage). They existed before Cloud IAM, but can still be used with Cloud IAM. The three primitive roles: the viewer role permits read-only actions, such as viewing existing resources or data across the whole project; the editor role adds modifying of existing resources; the owner role adds the right to manage roles and permissions and to set up billing for a project.

kubectl apply -f pod-reader-role.yaml

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: production
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "get", "list", "watch"]
   
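To grant that Role, bind it to a subject with a RoleBinding; a sketch assuming a user named pat@example.com:

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader-binding
  namespace: production
subjects:
- kind: User
  name: pat@example.com        # assumed subject
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io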

GKE provides several predefined roles for granular access to Kubernetes Engine resources. The GKE Viewer role gives read-only access, as might be needed for auditing. The GKE Developer role grants developers and release engineers full control of all resources within a cluster. The GKE Admin role gives project owners, system administrators, and on-call engineers full access to clusters and the Kubernetes Engine resources inside them (create, delete, update, view clusters); the related Cluster Admin role manages clusters but provides no access to Kubernetes resources inside them.

GKE custom roles provide even more granular control, for example a specific user account managing software running inside a certain GKE cluster, with no access to view other GCP resources, and nothing else.

Video Courses

Coursera’s “Architecting with Google Kubernetes Engine Specialization” is focused on building efficient computing infrastructures using Kubernetes and Google Kubernetes Engine (GKE). The specialization introduces participants to deploying and managing containerized applications on GKE and the other services provided by Google Cloud Platform. Through a combination of presentations, demos, and hands-on labs, participants explore and deploy solution elements, including infrastructure components such as pods, containers, deployments, and services; as well as networks and application services. The specialization also covers deploying practical solutions including security and access management, resource management, and resource monitoring.

  1. Google Cloud Platform Fundamentals: Core Infrastructure

    This course introduces you to concepts and terminology for working with Google Cloud Platform (GCP). You learn about, and compare, many of the computing and storage services available in Google Cloud Platform, including Google App Engine, Google Compute Engine, Google Kubernetes Engine, Google Cloud Storage, Google Cloud SQL, and BigQuery. You learn about important resource and policy management tools, such as the Google Cloud Resource Manager hierarchy and Google Cloud Identity and Access Management. Hands-on labs give you foundational skills for working with GCP.

  2. Architecting with Google Kubernetes Engine: Foundations reviews the layout and principles of Google Cloud Platform, followed by an introduction to creating and managing software containers and an introduction to the architecture of Kubernetes.

  3. Architecting with Google Kubernetes Engine: Workloads by Alex Hanna. Covers: GKE Cluster; Deployments;Jobs and Cronjobs; Cluster Scaling; Pod placement; Pod Autoscaling and Node Pools; Pod networking; Services, Ingress; Load balancing; Network security; Volumes, Stateful Sets; ConfigMaps; Secrets; Persistent Data;

    To create a cluster with autoscaling:

    gcloud container clusters create cluster-name --num-nodes 30 \
    --enable-autoscaling --min-nodes 15 --max-nodes 50 --zone compute-zone 

    To scale nodes in a cluster node pool:

    gcloud container clusters resize projectdemo --node-pool default-pool --size 6

    To disable auto-scaling:

    ... --no-enable-autoscaling ...
  4. Architecting with Google Kubernetes Engine: Production

Autoscaler

  • https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/proposals/autoscaling.md now obsolete
  • https://github.com/kubernetes/community/blob/master/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md
  • https://www.tutorialspoint.com/kubernetes/kubernetes_replica_sets.htm
  • resize the amount of CPU/RAM for a specific Pod or Container. https://github.com/kubernetes/kubernetes/issues/2072
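
The imperative command for Horizontal Pod Autoscaling (the deployment name here is assumed):

kubectl autoscale deployment nginx-deployment --min=2 --max=10 --cpu-percent=80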

References

K8s failure stories at k8s.af

K8s experts Fairwinds.com has https://github.com/FairwindsOps/apprentice-learning-plan for new Site Reliability Engineers. Fairwinds also has open-source tools at their FairwindsOps GitHub using @goreleaser:

k8s-school.fr:

  • https://k8s-school.fr/resources/en/blog/kubectl-run-deprecated/

LevelUpEducation:

  • https://github.com/LevelUpEducation/kubernetes-demo/issues/31

https://www.tutorialspoint.com/kubernetes/kubernetes_replica_sets.htm

Exam by Brad McCoy

https://www.linkedin.com/pulse/effectively-choosing-k8-node-size-capacity-anurag-gupta/

https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/

More on DevOps

This is one of a series on DevOps:

  1. DevOps_2.0
  2. ci-cd (Continuous Integration and Continuous Delivery)
  3. User Stories for DevOps
  4. Enterprise Software

  5. Git and GitHub vs File Archival
  6. Git Commands and Statuses
  7. Git Commit, Tag, Push
  8. Git Utilities
  9. Data Security GitHub
  10. GitHub API
  11. TFS vs. GitHub

  12. Choices for DevOps Technologies
  13. Pulumi Infrastructure as Code (IaC)
  14. Java DevOps Workflow
  15. Okta for SSO & MFA

  16. AWS DevOps (CodeCommit, CodePipeline, CodeDeploy)
  17. AWS server deployment options

  18. Cloud services comparisons (across vendors)
  19. Cloud regions (across vendors)
  20. AWS Virtual Private Cloud

  21. Azure Cloud Onramp (Subscriptions, Portal GUI, CLI)
  22. Azure Certifications
  23. Azure Cloud

  24. Azure Cloud Powershell
  25. Bash Windows using Microsoft’s WSL (Windows Subsystem for Linux)
  26. Azure KSQL (Kusto Query Language) for Azure Monitor, etc.

  27. Azure Networking
  28. Azure Storage
  29. Azure Compute
  30. Azure Monitoring

  31. Digital Ocean
  32. Cloud Foundry

  33. Packer automation to build Vagrant images
  34. Terraform multi-cloud provisioning automation
  35. Hashicorp Vault and Consul to generate and hold secrets

  36. Powershell Ecosystem
  37. Powershell on MacOS
  38. Powershell Desired System Configuration

  39. Jenkins Server Setup
  40. Jenkins Plug-ins
  41. Jenkins Freestyle jobs
  42. Jenkins2 Pipeline jobs using Groovy code in Jenkinsfile

  43. Docker (Glossary, Ecosystem, Certification)
  44. Make Makefile for Docker
  45. Docker Setup and run Bash shell script
  46. Bash coding
  47. Docker Setup
  48. Dockerize apps
  49. Docker Registry

  50. Maven on MacOSX

  51. Ansible

  52. MySQL Setup

  53. SonarQube & SonarSource static code scan

  54. API Management Microsoft
  55. API Management Amazon

  56. Scenarios for load
  57. Chaos Engineering