
The path of maturity in organizations going faster via DevSecOps


Overview

Here is my observation of a pattern of technology adoption over time for CI/CD (Continuous Integration/Continuous Delivery/Deployment). Some may call these levels of “maturity”. But you can use the pattern as a roadmap both for skill-building and for realizing advantages from “DevOps” automation.

NOTE: Content here is my personal opinion, not intended to represent any employer (past or present). “PROTIP:” highlights hard-won, little-known but significant facts based on my personal research and experience that I haven’t seen elsewhere on the internet.

  1. OS Package managers
  2. Language-based package managers
  3. Source repositories
  4. Binary repositories
  5. App build scripts
  6. Scheduled jobs on deploy server
  7. Deploy Server configuration management
  8. Automated jobs
  9. Services virtualization
  10. Secrets Vault
  11. Unit tests
  12. Code quality checks
  13. Team-level system tests
  14. Code Coverage

  15. Alerts
  16. Predictions based on previous metric values
  17. Metrics dashboard and retention
  18. System-level integration tests
  19. User-level acceptance tests
  20. Server monitoring
  21. Regression testing
  22. Continuous deployment to production
  23. Change acceleration

  24. Git secrets scanning
  25. SCA (Software Component Analysis)
  26. SAST (Static Application Security Testing)
  27. DAST (Dynamic Application Security Testing)

There are a lot of moving pieces behind “one click deploys”. And every organization has a different architecture. This diagram (translated from Japanese) is one example, but it contains many of the pieces:

[Diagram: example CI/CD workflow (ci-cd-workflow-1308x894.png)]

The flow above (from 2016) has Git and Jenkins sending tracking info to an Atlassian JIRA issue-tracking system. Intermediate Docker images are stored in Artifactory.


1. OS Package managers

Installing programs onto a computer’s operating system has traditionally been done manually: downloading and invoking GUI installers, then clicking “Next”, etc.

With this approach, manual effort is also needed to configure, remove, and upgrade versions. That’s a hassle.

Replacing this manual effort with automation is the “heart” of Continuous Integration technologies.

Each operating system has its own way of automating the installation of dependencies to custom code, via its own package manager:

  • Chocolatey.org for Windows exposes NuGet packages with a simple command like:

    choco install make
     
  • Homebrew for MacOS, which has commands such as:

    brew install make
     
  • apt-get install make for Debian and Ubuntu,

  • yum install make for Red Hat, CentOS

Package managers can update ALL installed libraries with a single command. With Homebrew, brew update refreshes the package index, then everything installed is upgraded by:

brew upgrade
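
Other package managers follow the same pattern. A sketch of the equivalent commands (exact flags vary by version):

    choco upgrade all                             # Windows (Chocolatey)
    sudo apt-get update && sudo apt-get upgrade   # Debian/Ubuntu
    sudo yum update                               # Red Hat/CentOS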

2. Language-based package managers

Each computer language has a different approach to obtaining dependencies of code. Each retrieves modules from a public repository.

The JavaScript-based Node framework has NPM (the Node Package Manager).

PROTIP: Greenkeeper identifies changes in dependencies in npm packages on GitHub.

Java has Maven, referencing http://search.maven.org/#browse, with Apache Ant plus Ivy as an alternative.

Ruby has RubyGems (the gem command) managing Ruby gems.

Python has pip, pulling from PyPI at https://pypi.python.org/pypi
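
Each follows the same one-command pattern to pull a named dependency. A sketch using well-known example packages:

    npm install express       # JavaScript: npm fetches from the npm registry
    mvn dependency:resolve    # Java: Maven downloads everything in pom.xml
    gem install rails         # Ruby: RubyGems fetches from rubygems.org
    pip install requests      # Python: pip fetches from PyPI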

3. Source repositories

When Microsoft and Google both abandoned their own repositories and moved their open-source code to GitHub, it became pretty clear that GitHub outright “owns” public source repositories.

More precisely, the Git repository format is the de facto international standard, also used by GitHub’s competitors (GitLab, etc.), displacing earlier tools such as Subversion.

The “Configuration as code” movement within DevOps is pressuring organizations to store server configuration code using the same Git mechanism as app developers.

All this is so that changes to human-readable text are identified by who made specific changes, when each change occurred, and why (as described in commit messages).

Use of a versioned repository enables the team to back out a release and roll back to a previous release if something goes horribly wrong.

Also, when a developer retrieves team-level code from the team repository (using a Git fetch or pull command), Git automatically identifies whether several people have worked on the same lines in the same file.

When developers commit their work to a team repository in small increments of working code, the team’s actual progress can be measured in those increments. This may save time reporting progress. Those who want to know can simply view a dashboard.
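
A sketch of that round trip, including backing out a bad change (the file name and commit SHA are hypothetical):

    git pull origin main                     # retrieve team-level code
    git add app.py                           # stage a small increment of working code
    git commit -m "Validate input length"    # record who, when, and why
    git push origin main                     # publish to the team repository
    git revert abc1234                       # back out a bad commit as a new commit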

4. Binary repositories

For files meant to be read by computers, such as those created by compilation and builds (Windows .dll files) and graphics files, there are Artifactory and Nexus.

They expose, internally, security-vetted editions of packages otherwise fetched from public repositories by installers like Homebrew.

5. Build scripts

When an individual developer works alone on a Java package, a file controlling a build tool (a Makefile for Make, build.xml for Ant, or pom.xml for Maven) saves a few seconds versus typing commands.

But more important than saving time: because modern apps are tested against specific versions of many dependencies, a pom.xml file specifies all of them, for consistency.

pom.xml configuration files are stored along with code in a Git repository.

To invoke Maven to process the pom.xml file, there is one command:

mvn clean install

The clean and install goals remove previous build output, then download the correct version of all dependencies noted in the file and build the artifact.

6. Scheduled jobs on deploy server

When developers work together, a separate server running build scripts avoids disturbing individual workers’ laptops.

Running build scripts nightly ensures that not much time goes by before catching issues in builds. The simplest integration test is whether Java or C# code can all compile and be assembled into a final executable.

Such runs can be simply initiated on a nightly schedule by a cron utility program that comes with Linux operating systems.

Scheduled smoke tests are also useful to determine whether applications can still be used, such as signing in and out. Run early in the morning, these provide “early warning” so people can take remedial action before others begin work that depends on the app.

Scripts running every few minutes provide constant activity that ensures processes remain in the server’s memory, and thus avoid delays for the first person who signs in early in the day.
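
A crontab sketch of those schedules (the script paths are hypothetical):

    # min hour dom mon dow  command
      0   2    *   *   *    /opt/ci/nightly-build.sh   # nightly build at 02:00
      0   6    *   *   *    /opt/ci/smoke-test.sh      # sign-in/out smoke test at 06:00
     */5  *    *   *   *    /opt/ci/keepalive.sh       # keep processes in memory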

7. Deploy Server Configuration Management

On the server, specific versions of various packages (from the JVM to application servers) need to be installed in a specific order.

That is the job of configuration management scripts processed by tools such as Ansible, Chef, Puppet, or Salt (see these videos, then the sketch after them):

  • https://www.youtube.com/watch?v=OmRxKQHtDbY
  • https://www.youtube.com/watch?v=XJpN8qpxWbA&t=1859s
  • https://www.youtube.com/watch?v=2H95tx7Fuv4
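
A sketch of invoking one such tool, assuming Ansible with an inventory file named hosts and a playbook named site.yml:

    ansible-playbook -i hosts site.yml --check   # dry run: report what would change
    ansible-playbook -i hosts site.yml           # apply package installs in order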

8. Automated jobs

The introduction of a continuous integration server (such as Jenkins, Bamboo, Travis CI, CircleCI, etc.) enables event-driven rather than schedule-driven kickoff of build automation.

A typical event is new code being committed to the version control system. The mechanism is called a “webhook” in GitHub/GitLab.

Jenkins build software provides more features to automate jobs than simple cron processes.

In addition to doing builds, Jenkins can kick off test automation if a build is successful. Such a mechanism is called “post-build actions” within Jenkins.

PROTIP: Jenkins can be configured to stand up entire environments downstream immediately when a build is successful. This is an improvement over waiting for development work to be done before provisioning system-testing servers. It is made possible by the availability of cheaper servers and automation.

Jenkins has hundreds of plug-ins to perform additional work.
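
Jenkins also exposes a remote API, so a webhook or script can kick off a job. A sketch with hypothetical server, job, and credential names:

    curl -X POST "https://jenkins.example.com/job/myapp-build/build" \
         --user "ci-user:API_TOKEN"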

9. Services Virtualization

QUESTION: How many service dependencies will programs be calling?

QUESTION: Who can provide information about those dependencies?

When servers providing services become unavailable, developers must wait. When servers providing services get updated, those assuming a previous version must stop and update.

Do those services have associated virtualization programs?

Service virtualization (SV) software provides “mock” service end-points so caller programs can keep working.

There are several providers of SV packages: CA LISA, HP/Micro Focus, Perforce, etc.
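
At its crudest, a mock end-point is just a canned response served locally. This is a stand-in sketch (the file, values, and port are hypothetical) rather than a full SV product:

    mkdir -p mocks
    echo '{"status":"ok","balance":100}' > mocks/account.json
    ( cd mocks && python3 -m http.server 8080 )   # GET /account.json returns the canned body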

10. Secrets Vault

For convenience, developers leave passwords in source code to avoid typing them in all the time. Secrets are needed to call databases and APIs.

But passwords should not be stored openly in public repositories. Hackers use “dorking” scripts to scan code repositories for passwords.

HashiCorp created Vault (vaultproject.io) to provide secure access to secrets in a unified way. It takes care of secure storage with detailed audit logs, as well as key rolling (leasing and renewal) and revocation.

The “HashiCorp Vault” plug-in enables Jenkins jobs to obtain secrets.
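
A sketch of the Vault CLI, assuming a running, unsealed Vault server and an authenticated token (the path and value are hypothetical):

    vault kv put secret/myapp db_password='s3cr3t'   # store a secret
    vault kv get -field=db_password secret/myapp     # retrieve it in a job script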

11. Unit tests

Developers who favor the “TDD” (Test Driven Development) approach begin by writing tests to check whether code returns the expected output.

The first lines coded in a module may be error-checking code that returns negative responses.

Initially, the test would fail because the code has not yet been written.

But when working code is added that returns positive results, that code can be counted as “done”.

This approach enables a tenet of Agile programming: measuring completion by working software rather than just the number of lines of code written.

In the Java world, JUnit is the de facto standard, although TestNG is also popular. For C# applications, the NUnit testing framework provides functionality similar to JUnit’s, as does Test::Unit for Ruby. For C/C++, there is CppUnit. PHP developers can use PHPUnit.

Because “xUnit” tools all create result reports in XML format, they can be displayed using the same “xUnit Plugin” for Jenkins.

Jenkins makes a distinction between failed builds and unstable builds. Failed builds are indicated by a red ball. Unstable builds, not considered of sufficient quality, are indicated by a yellow ball.
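
A sketch of that flow in the Java world: Maven’s Surefire plug-in runs the JUnit tests and writes the XML results that the xUnit Plugin reads:

    mvn test                             # run unit tests via Surefire
    ls target/surefire-reports/*.xml     # XML results consumed by Jenkins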

12. Code quality checks

Developers are increasingly adding code scanners such as SonarQube to automatically ensure that all code complies with rules. CAST software analyzes code for the system as a whole.

Note that even with software scans, there is still value in a team talking about each other’s work.

However, automation enables the team to focus on more substantive topics because each individual can deal on their own with repetitive issues that can be identified automatically.
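
A sketch of triggering a SonarQube scan from a Maven build (the server URL and token are hypothetical):

    mvn sonar:sonar \
      -Dsonar.host.url=https://sonarqube.example.com \
      -Dsonar.login=$SONAR_TOKEN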

13. Team-level system tests

Even when code does not directly conflict with another developer’s work, running programs may conflict with others who share memory or packages or other resources.

Running individual unit tests along with other unit tests may reveal conflicts.

For example, one integration conflict between components occurs when an element is removed from the receiver code of an API call but not from the caller code.

14. Code Coverage

Continuous integration depends on a high percentage of code coverage to ensure that “everything” works prior to deployment.

Apps such as SonarQube identify whether test code covers (causes to execute) specific lines of code.
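
Coverage is typically measured by an agent such as JaCoCo, then displayed in SonarQube. A sketch, assuming the jacoco-maven-plugin is configured in the pom.xml:

    mvn test jacoco:report               # run tests, then write the coverage report
    ls target/site/jacoco/index.html     # HTML summary of lines covered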

15. Alerts

To notify people when builds fail, Jenkins can be configured to send out emails, which may contain logs and reports from builds and test jobs.

Jenkins also has a wide range of ready-built plug-ins and APIs that use more proactive channels such as instant messaging.

Such mechanisms are often used to run “smoke” tests to detect whether a system is still working so that people can be alerted to troubleshoot before too much time goes by.

But an organization cannot rely on alerts alone.

16. Predictions based on previous metric values

Performance or “speed” tests are often included to measure the response time for a single simulated user. Such metrics for various transactions in the program are maintained over time so that developers are alerted if response times suddenly become slower than historical averages.

This provides an early warning system for possible issues before additional labor is spent on an architecture that needs re-design.

Tools such as PagerDuty are used to specify escalation points of contact for each application.
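
A sketch of such a comparison (the files and the 20% threshold are hypothetical):

    avg=$(cat baseline_ms.txt)      # historical average, e.g. 250
    now=$(cat latest_ms.txt)        # latest measured response time, e.g. 310
    limit=$(( avg * 120 / 100 ))    # alert when 20% over baseline
    if [ "$now" -gt "$limit" ]; then
      echo "ALERT: ${now}ms exceeds ${limit}ms (baseline ${avg}ms)"
    fi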

17. Metrics dashboard and retention

By default, Jenkins maintains a history of builds.

Annotations, such as the origin of test failures and how to fix them, can be added to a job using the Edit Description link in the top right-hand corner of the Jenkins screen.

However, Jenkins does not aggregate a group of jobs for display in a single dashboard.

A computer monitor dedicated to such constant display (like at an airport) is often called a “build radiator”.

Some dashboards consolidate measures of efficiency and effectiveness into a single metric.

18. System-level integration tests

While individual unit tests typically use static (unchanging) data, tests of whether individual components “integrate” with each other tend to use more dynamic data that changes during a test run.

Tests of APIs (Application Programming Interfaces) exposed by “microservices” are conducted at this level.

This step requires the team to identify which components and services must be available together in the test environment.
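
A sketch of a gate such tests often start with: verify a dependency is reachable before running deeper integration suites (the end-point is hypothetical):

    status=$(curl -s -o /dev/null -w "%{http_code}" https://api.example.com/health)
    if [ "$status" -ne 200 ]; then
      echo "Integration environment not ready (HTTP $status)"; exit 1
    fi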

19. User-level acceptance tests

Automation of tests that focus on what end-users do is called Behavior-Driven Development (BDD) or Acceptance-Test-Driven Development (ATDD). Examples of tools for this are Cucumber, FitNesse, JBehave, RSpec, easyb, etc.

The end-user focus also enables automation that proves which user features have been implemented, and which remain to be done.

Stress and load testing are done at this level.

20. Server monitoring

Measurement of server status and resource usage is necessary especially during stress and load tests.

Software in this category include Nagios, Splunk, AppDynamics, New Relic, etc.

Measurement history over time should be analyzed to determine trends which may impact capacity.
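
A sketch of crude resource capture during a load test (vmstat ships with most Linux distributions):

    while true; do
      echo "$(date -u +%FT%TZ) $(vmstat 1 2 | tail -1)" >> server-stats.log
      sleep 5   # one averaged CPU/memory sample every 5 seconds
    done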

21. Regression testing

Regression testing ensures that everything still works after changes are made.

Investments in automation are returned as savings of the labor otherwise wasted repeating tests manually during each cycle of change.

Individual TestNG tests can be temporarily deactivated using the @Test(enabled=false) annotation (JUnit 4 uses @Ignore).

22. Continuous Deployment to production

The use of build and test automation enables a team to have confidence that potential defects can be caught before appearing in production use by end-users (customers).

Trust in automation is what enables continuous deployment directly into production.

But to those accustomed to a traditional waterfall approach, quick movement into production is one of many steps where action can stagnate.

Comprehensive tests enable a “fail fast, recover fast” capability that removes the fear from the experimentation necessary to increase innovation.

The vast number of servers available instantly from public cloud vendors means that entire stacks of servers can be stood up such that several stacks can serve production loads at the same time.

This makes it possible to quickly switch from one version to another, even after deployment. Many call this the “blue/green” strategy.
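
A sketch of one way to flip blue/green traffic, assuming a hypothetical nginx setup where the active upstream is a symlink:

    ln -sfn /etc/nginx/upstreams/green.conf /etc/nginx/upstreams/active.conf
    nginx -t && nginx -s reload    # validate config, then reload without downtime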

### Terraform vs. Helm

Both describe and maintain Kubernetes objects as code. Both allow variables to be overridden at the file and command-line levels. Both allow installation from multiple sources, such as local directories and Git repositories.

| Aspect | Terraform | Helm |
|---|---|---|
| Format | JSON/HCL file format | standard manifests with Go templates |
| Environment variables | supported | - |
| Modularity | modules | sub-charts |
| Packages | Module Registry | stable and incubator charts |
| Dry run | plan subcommand | --dry-run flag |

https://docs.gitlab.com/ee/install/kubernetes/gitlab_omnibus.html
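
A sketch of the “Dry run” row above, previewing changes without applying them (the release and chart names are hypothetical):

    terraform plan                                        # Terraform
    helm install myrelease ./mychart --dry-run --debug    # Helm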

23. Change acceleration

PROTIP: The automation described above accelerates what people can do manually. Acceleration is measured by how quickly changes go from concept through various environments and finally into production.

This enables business agility – the ability to respond more quickly to changes in market forces and customer needs.

Organizations that move quicker than their competitors have a significant, fundamental competitive advantage.

So investment needs to be driven from the corporate top level down and fostered from the bottom up.

This means not just posters and team-building retreats, but professionally designed change management programs, technical hackathons, and training for directors and program managers as well as technicians.

“You can’t buy DevOps. You have to sell it.” To everyone.

So working out the psychological and political issues is, in my experience, far more important to actual adoption success than technical and financial factors.

24. Git secrets scanning
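
Scanners such as gitleaks and AWS Labs’ git-secrets search a repository’s entire history for committed credentials, the same way attackers’ “dorking” scripts do. A sketch using gitleaks (assuming it is installed):

    gitleaks detect --source . --verbose   # scan the current repo's full history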

25. Software Component Analysis

SCA tools look inside Docker container images to identify whether the open-source components referenced contain vulnerabilities among the CVEs published by the U.S. National Vulnerability Database (NVD). The analysis covers the entire hierarchy of dependencies: for each component referenced, the components it references are also analyzed, and so on.

Some SCA vendors, such as JFrog, also include proprietary analysis.

  • JFrog Artifactory adds data from “VulDB”.

The analysis also includes license-related vulnerabilities, such as components with risky license designations (or no license at all). An example from a JFrog Xray summary of analysis within a Manifest.json file:

[Screenshot: JFrog Xray license counts (xray-licenses-count)]

Vendors in the SCA market were identified in Forrester’s 2021 market analysis:

[Chart: Forrester 2021 SCA vendor landscape (devsecops-sca-forrester-2021-1200x1538)]
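
The commercial tools above do this at scale. As a minimal open-source sketch, Trivy (assuming it is installed) scans a container image’s dependency hierarchy against NVD data:

    trivy image --severity HIGH,CRITICAL myapp:latest   # image name is hypothetical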

26. Static Application Security Testing

  • Fortify from Micro Focus

27. DAST (Dynamic Application Security Testing)

  • Fortify from Micro Focus
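
As a minimal open-source sketch, OWASP ZAP’s baseline scan probes a running application from the outside, the way DAST tools do (the target URL is hypothetical):

    docker run -t owasp/zap2docker-stable zap-baseline.py -t https://staging.example.com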

More on DevOps

This is one of a series on DevOps:

  1. DevOps_2.0
  2. ci-cd (Continuous Integration and Continuous Delivery)
  3. User Stories for DevOps
  4. Enterprise Software

  5. Git and GitHub vs File Archival
  6. Git Commands and Statuses
  7. Git Commit, Tag, Push
  8. Git Utilities
  9. Data Security GitHub
  10. GitHub API
  11. TFS vs. GitHub

  12. Choices for DevOps Technologies
  13. Pulumi Infrastructure as Code (IaC)
  14. Java DevOps Workflow
  15. Okta for SSO & MFA

  16. AWS DevOps (CodeCommit, CodePipeline, CodeDeploy)
  17. AWS server deployment options
  18. AWS Load Balancers

  19. Cloud services comparisons (across vendors)
  20. Cloud regions (across vendors)
  21. AWS Virtual Private Cloud

  22. Azure Cloud Onramp (Subscriptions, Portal GUI, CLI)
  23. Azure Certifications
  24. Azure Cloud

  25. Azure Cloud Powershell
  26. Bash Windows using Microsoft’s WSL (Windows Subsystem for Linux)
  27. Azure KSQL (Kusto Query Language) for Azure Monitor, etc.

  28. Azure Networking
  29. Azure Storage
  30. Azure Compute
  31. Azure Monitoring

  32. Digital Ocean
  33. Cloud Foundry

  34. Packer automation to build Vagrant images
  35. Terraform multi-cloud provisioning automation
  36. Hashicorp Vault and Consul to generate and hold secrets

  37. Powershell Ecosystem
  38. Powershell on MacOS
  39. Powershell Desired System Configuration

  40. Jenkins Server Setup
  41. Jenkins Plug-ins
  42. Jenkins Freestyle jobs
  43. Jenkins2 Pipeline jobs using Groovy code in Jenkinsfile

  44. Docker (Glossary, Ecosystem, Certification)
  45. Make Makefile for Docker
  46. Docker Setup and run Bash shell script
  47. Bash coding
  48. Docker Setup
  49. Dockerize apps
  50. Docker Registry

  51. Maven on MacOSX

  52. Ansible
  53. Kubernetes Operators
  54. OPA (Open Policy Agent) in Rego language

  55. MySQL Setup

  56. Threat Modeling
  57. SonarQube & SonarSource static code scan

  58. API Management Microsoft
  59. API Management Amazon

  60. Scenarios for load
  61. Chaos Engineering

This is one of a series on Git and GitHub:

  1. Git and GitHub videos

  2. Why Git? (file-based backups vs Git clone)
  3. Git Markdown text

  4. Git basics (script)
  5. Git whoops (correct mistakes)
  6. Git messages (in commits)

  7. Git command shortcuts
  8. Git custom commands

  9. Git-client based workflows

  10. Git HEAD (Commitish references)

  11. Git interactive merge (imerge)
  12. Git patch
  13. Git rebase

  14. Git utilities
  15. Git-signing

  16. Git hooks
  17. GitHub data security
  18. TFS vs GitHub

  19. GitHub actions for automation JavaScript
  20. GitHub REST API
  21. GitHub GraphQL API
  22. GitHub PowerShell API Programming
  23. GitHub GraphQL PowerShell Module