
How organizations go faster


Overview

Here is a pattern I have observed in how organizations adopt technology for CI/CD. Some may call these levels of “maturity”, but you can use them as a road-map for realizing the advantages of “DevOps” automation.

  1. Package managers
  2. Deploy script
  3. Server configuration management
  4. Nightly builds on deploy server
  5. Unit tests
  6. Code repositories
  7. Team-level system tests
  8. Code quality checks
  9. Automated runs
  10. Code Coverage
  11. Alerts
  12. Predictions based on previous metric values
  13. Metrics dashboard and retention
  14. System-level integration tests
  15. User-level acceptance tests
  16. Server monitoring
  17. Regression testing
  18. Continuous deployment to production
  19. Change acceleration

1. Package managers

An application running on a server can require dozens of dependencies. To avoid “reinventing the wheel” with custom coding, package managers supply components for logging, authentication, etc., both as programs run by the operating system and as packages referenced within source code.

But it’s a chore to keep them updated.

So each operating system comes with its own manager: apt-get for Debian and Ubuntu, yum for Red Hat, Homebrew for macOS, etc. A sample command is:

brew install make
   

For Windows machines, there is NuGet. Chocolatey.org exposes NuGet packages with a simple command, much like Homebrew does:

choco install make
   

Larger organizations make use of Nexus or Artifactory, repositories of binary files (such as the .dll and .war files created during compilation). Internally, they expose security-vetted editions of publicly available packages and installers.
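As one hypothetical illustration, a mirror entry in a Maven settings.xml file routes all dependency downloads through such an internal repository (the URL here is made up):

    <settings>
      <mirrors>
        <mirror>
          <id>internal-repo</id>
          <name>Internal mirror of public repositories</name>
          <mirrorOf>*</mirrorOf>
          <url>https://nexus.example.com/repository/maven-public/</url>
        </mirror>
      </mirrors>
    </settings>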

Package managers can refresh their package index and update all installed libraries with a single command sequence, such as:

brew update && brew upgrade
   

2. Deploy script

When an individual developer works alone, a file written to control the Make or Ant build tool more than pays for itself by saving the seconds otherwise spent typing commands for each build.

Modern apps require specific versions of many dependencies, so a build file needs to specify all of them for consistency.

Such files are usually stored along with code in a Git repository.

A Makefile that downloads the correct version of every dependency is invoked with a single command:

make
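
Below is a minimal sketch of such a Makefile; the target names and fetch script are illustrative:

    # Default target: fetch pinned dependencies, then compile.
    # (Recipe lines must be indented with a tab character.)
    all: deps build

    deps:
        ./scripts/fetch-deps.sh    # downloads the versions pinned for this release

    build: deps
        javac -d build src/*.java  # compiles only after dependencies are present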
   

3. Server configuration management

On the server, specific versions of various packages (from the JVM to application servers) need to be installed in a specific order.

That is the job of configuration management scripts processed by tools such as Ansible, Chef, and Puppet.
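Below is a minimal Ansible playbook sketch; the host group, package names, and versions are illustrative. Tasks run in order, so the JVM is in place before the application server:

    - hosts: appservers
      become: yes
      tasks:
        - name: Install a specific JVM version first
          apt:
            name: openjdk-11-jdk
            state: present

        - name: Install the application server after the JVM
          apt:
            name: tomcat9
            state: present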

4. Nightly builds on deploy server

When developers work together, a separate server running build scripts avoids disturbing individual developers’ laptops.

Running build scripts nightly ensures that not much time goes by before build issues are caught. The simplest integration check is whether all the Java or C# code compiles and can be assembled into a final executable.

Such runs can be initiated on a nightly schedule by the cron utility program that comes with Linux operating systems.
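For example, this crontab entry (with a hypothetical script path) runs a build script at 2:00 AM every day and appends its output to a log:

    0 2 * * * /opt/ci/nightly-build.sh >> /var/log/nightly-build.log 2>&1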

At this point, the team may feel little obligation to fix broken builds immediately, and builds may stay broken on the build server.

5. Unit tests

Developers who favor the “TDD” (Test-Driven Development) approach begin by writing tests that check whether code returns the expected output. Initially, each test fails. The first lines within a module may be error-checking code that returns negative responses.

But when working code is added that returns positive results, that code is counted as “done”.

This approach embodies a tenet of Agile programming: measuring completion by working software rather than by the number of lines of code written.

In the Java world, JUnit is the de facto standard, although TestNG is also popular. For C# applications, the NUnit testing framework provides functionality similar to JUnit’s, as does Test::Unit for Ruby. For C/C++, there is CppUnit. PHP developers can use PHPUnit.
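Below is a minimal JUnit 4 sketch of that sequence; the PriceCalculator class is hypothetical, shown inline so the example is self-contained:

    import org.junit.Test;
    import static org.junit.Assert.*;

    // Hypothetical class under test. In TDD it would not exist yet,
    // so both tests below would fail until it is written.
    class PriceCalculator {
        double totalWithDiscount(double total, double discount) {
            if (discount < 0)
                throw new IllegalArgumentException("negative discount");
            return total * (1 - discount);
        }
    }

    public class PriceCalculatorTest {

        // Error-checking path: the negative response comes first.
        @Test(expected = IllegalArgumentException.class)
        public void negativeDiscountIsRejected() {
            new PriceCalculator().totalWithDiscount(100.0, -0.5);
        }

        // Positive path: once this passes, the code counts as "done".
        @Test
        public void discountIsAppliedToTotal() {
            assertEquals(90.0,
                new PriceCalculator().totalWithDiscount(100.0, 0.10), 0.001);
        }
    }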

Because “xUnit” tools all create result reports in XML format, they can be displayed using the same “xUnit Plugin” for Jenkins.

Jenkins makes a distinction between failed builds and unstable builds. Failed builds are indicated by a red ball. Unstable builds, which are not considered of sufficient quality, are indicated by a yellow ball.

6. Code repositories

When developers commit their work to a team repository in small increments of working code, the team’s actual progress can be measured in those increments. This saves time reporting progress: those who want to know can simply view a dashboard.

Use of a versioned repository enables the team to back out a release and roll back to a previous one if something goes horribly wrong.

Also, when a developer retrieves team-level code from the team repository (using a Git fetch or pull command), Git automatically identifies whether several people have worked on the same lines in the same file.
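A typical sequence, assuming a remote named origin and a branch named main:

    git fetch origin        # download team-level commits without merging
    git merge origin/main   # Git flags overlapping edits as conflicts

Where edits overlap, Git marks the conflicting lines with <<<<<<< and >>>>>>> markers in the file for the developer to resolve.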

7. Team-level system tests

Even when code does not directly conflict with another developer’s work, running programs may conflict with others that share memory, packages, or other resources.

Running individual unit tests along with other unit tests may reveal conflicts.

For example, one integration conflict between components arises when an element is removed in the receiver code but not in the caller code of an API call.

8. Code quality checks

Developers are increasingly adding code scanners such as SonarQube to automatically ensure that all code complies with rules. Software named CAST analyzes code for the system as a whole.

Note that even with software scans, there is still value in a team talking through each other’s work.

However, automation enables the team to focus on more substantive topics, because each individual can deal on their own with the repetitive issues that are identified automatically.

9. Automated runs

The introduction of a continuous integration server such as Jenkins enables event-driven rather than calendar-driven kick-off of build automation.

Such an event is typically when new code is committed to the version control system. The mechanism is called a “web hook” in GitHub/GitLab.

In addition to doing builds, Jenkins can kick off test automation if a build is successful. Such a mechanism is called “post-build actions”.
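A minimal declarative Jenkinsfile sketch of that flow; the shell commands and polling schedule are illustrative:

    pipeline {
        agent any
        triggers {
            // A GitHub web hook normally triggers the build on each push;
            // polling is shown here as a self-contained fallback.
            pollSCM('H/15 * * * *')
        }
        stages {
            stage('Build') {
                steps { sh 'make' }
            }
            stage('Test') {
                // Runs only if the Build stage succeeded,
                // like a post-build action.
                steps { sh 'make test' }
            }
        }
    }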

10. Code Coverage

There are programs, such as SonarQube, which identify whether test code covers (causes to execute) specific lines of code.

Continuous integration depends on a high percentage of code coverage to ensure that “everything” works prior to deployment.
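For example, assuming the jacoco-maven-plugin is configured in a Java project’s POM, a single Maven command runs the tests and produces a line-coverage report:

    mvn clean test jacoco:report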

11. Alerts

To notify people when builds fail, Jenkins can be configured to send out emails, which may contain logs and reports from builds and test jobs.

Jenkins also has a wide range of ready-built APIs to call, which use more proactive channels such as instant messaging.

Such mechanisms are often used to run “smoke” tests to detect whether a system is still working so that people can be alerted to troubleshoot before too much time goes by.
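Extending the Jenkinsfile sketched earlier, a post section such as this one (the address is illustrative) emails the team whenever a build fails:

    post {
        failure {
            mail to: 'team@example.com',
                 subject: "FAILED: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                 body: "Logs and reports: ${env.BUILD_URL}"
        }
    }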

But an organization cannot rely on alerts alone.

12. Predictions based on previous metric values

Performance or “speed” tests are often included to measure the response time for a single simulated user. Such metrics for various transactions in the program are maintained over time so that developers are alerted if response times suddenly become slower than historical averages.

This provides an early warning of possible issues before additional labor is spent on an architecture that needs re-design.
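A minimal sketch of such a check, independent of any particular tool; the 50% threshold is illustrative:

    import java.util.List;

    public class ResponseTimeCheck {

        // True if the latest response time exceeds the
        // historical average by more than half.
        static boolean suddenlySlower(List<Double> historyMs, double latestMs) {
            double avg = historyMs.stream()
                                  .mapToDouble(Double::doubleValue)
                                  .average()
                                  .orElse(latestMs);
            return latestMs > avg * 1.5;
        }

        public static void main(String[] args) {
            List<Double> history = List.of(120.0, 130.0, 125.0);
            System.out.println(suddenlySlower(history, 210.0)); // true: alert
        }
    }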

Tools such as PagerDuty are used to specify escalation points of contact for each application.

13. Metrics dashboard and retention

By default, Jenkins maintains a history of builds.

Annotations, such as the origin of test failures and how to fix them, can be added to a job using the Edit Description link in the top right-hand corner of the Jenkins screen.

However, Jenkins does not aggregate a group of jobs for display in a single dashboard.

A computer monitor dedicated to such constant display (like at an airport) is often called a “build radiator”.

Some dashboards consolidate measures of efficiency and effectiveness into a single metric.

14. System-level integration tests

While individual unit tests typically use static (unchanging) data, tests of whether individual components “integrate” with each other tend to use more dynamic data that changes during a test run.

Tests of APIs (Application Programming Interfaces) between “microservices” are conducted at this level.

This step requires the team to identify and manage the dynamic test data that such tests depend on.

15. User-level acceptance tests

Automated tests that focus on what end-users do are called Behavior-Driven Development (BDD) and Acceptance-Test-Driven Development (ATDD) tests. Examples of tools for this are Cucumber, FitNesse, JBehave, RSpec, easyb, etc.
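For example, Cucumber expresses such tests as plain-language “Gherkin” scenarios; the feature below is illustrative:

    Feature: Checkout discount
      Scenario: Customer applies a discount code
        Given a cart containing 2 items
        When the customer applies the code "SAVE10"
        Then the total is reduced by 10 percent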

The end-user focus also enables automation that proves which user features have been implemented and which remain to be done.

Stress and load testing are done at this level.

16. Server monitoring

Measurement of server status and resource usage is necessary, especially during stress and load tests.

Software in this category includes Nagios, Splunk, AppDynamics, New Relic, etc.

Measurement history should be analyzed over time to identify trends which may impact capacity.

17. Regression testing

Regression testing ensures that everything still works after changes are made.

Investments in automation are returned in savings from the labor otherwise wasted repeating tests manually during each cycle of change.

Individual TestNG tests can be temporarily deactivated using the @Test(enabled=false) annotation; JUnit 4 provides the @Ignore annotation for the same purpose.
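A minimal TestNG sketch; the test class and helper are hypothetical:

    import org.testng.annotations.Test;
    import static org.testng.Assert.assertEquals;

    public class CheckoutRegressionTest {

        // Temporarily deactivated, perhaps while the feature is reworked.
        @Test(enabled = false)
        public void legacyCouponStillApplies() {
            assertEquals(applyCoupon(100.0), 90.0, 0.001);
        }

        // Hypothetical helper standing in for real application code.
        private double applyCoupon(double total) {
            return total * 0.9;
        }
    }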

18. Continuous deployment to production

The use of build and test automation gives a team confidence that potential defects will be caught before appearing in production use by end-users (customers).

Trust in automation is what enables continuous deployment directly into production.

But to those accustomed to a traditional waterfall approach, quick movement into production is one of many steps where action can stagnate.

Comprehensive tests enable a “fail fast, recover fast” capability that removes the fear from the experimentation necessary to increase innovation.

The vast number of servers available instantly from public cloud vendors means that entire stacks of servers can be stood up so that several stacks can serve production loads at the same time.

This makes it possible to switch quickly from one version to another, even after deployment. Many call this the “blue/green” strategy.
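One hypothetical way to implement the switch is to keep both stacks running behind nginx and repoint a symlink at the configuration that routes traffic to the new (“green”) stack, then reload without dropping connections (the paths are illustrative):

    ln -sfn /etc/nginx/stacks/green.conf /etc/nginx/conf.d/active.conf
    nginx -s reload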

19. Change acceleration

The automation described above accelerates what people previously did manually. Acceleration is measured by how quickly changes move from concept through various environments and finally into production.

This enables business agility – the ability to respond more quickly to changes in market forces and customer needs.

Organizations that move quicker than their competitors have a significant, fundamental competitive advantage.

So investment needs to be driven from the corporate top level down and fostered from the bottom up.

This means not just posters and team-building retreats, but professionally designed change-management programs, technical hackathons, and training for directors and program managers as well as technicians.

“You can’t buy DevOps. You have to sell it.” To everyone.

So working out the psychological and political issues is, in my experience, far more important to actual adoption success than technical and financial factors.
