How organizations go faster
- 1. OS Package managers
- 2. Language-based packages
- 3. Source repositories
- 4. Binary repositories
- 5. Build scripts
- 6. Scheduled jobs on deploy server
- 7. Deploy Server Configuration Management
- 8. Automated jobs
- 9. Secrets Vault
- 10. Unit tests
- 11. Code quality checks
- 12. Team-level system tests
- 13. Code Coverage
- 14. Alerts
- 15. Predictions based on previous metric values
- 16. Metrics dashboard and retention
- 17. System-level integration tests
- 18. User-level acceptance tests
- 19. Server monitoring
- 20. Regression testing
- 21. Continuous Deployment to production
- PROTIP: Change acceleration
Here is my observation about a pattern of technology adoption for CI/CD (Continuous Integration/Continuous Deployment). Some may call this levels of “maturity”. But you can use it as a road-map for both skill-building and realizing advantages from “DevOps” automation.
- OS Package managers
- Language-based package managers
- Source repositories
- Binary repositories
- Build scripts
- Automated jobs
- Builds on deploy server
- Deploy Server configuration management
- Secrets Vault
- Unit tests
- Code quality checks
- Team-level system tests
- Code Coverage
- Predictions based on previous metric values
- Metrics dashboard and retention
- System level integration tests
- User-level acceptance tests
- Server monitoring
- Regression testing
- Continuous deployment to production
- Change acceleration
1. OS Package managers
Installing computer programs onto a computer’s operating system has been done manually by downloading and invoking GUI programs, then clicking “Next”, etc.
With this approach, manual effort is also needed to configure, remove, and upgrade versions. That's a hassle.
Replacement of this manual effort with automation is the “heart” of technologies for Continuous Integration.
Each operating system has its own way of automating dependencies to custom code, and each comes with its own package manager:
- Chocolatey.org for Windows
- apt-get for Debian and Ubuntu,
- yum for Red Hat,
- Homebrew for MacOS, which has commands such as:
brew install make
By contrast, Chocolatey exposes NuGet packages with a simple command, much like Homebrew does:
choco install make
Package managers can automatically update ALL installed libraries with a single command.
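For example, assuming each manager is installed on its platform, the "update everything" commands look like this:

```shell
# Update all packages a manager has installed, in one command:
brew update && brew upgrade                      # Homebrew (macOS)
choco upgrade all -y                             # Chocolatey (Windows)
sudo apt-get update && sudo apt-get upgrade -y   # apt (Debian/Ubuntu)
sudo yum update -y                               # yum (Red Hat)
```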
2. Language-based package managers
Each computer language has a different approach to obtaining dependencies of code. Each retrieves modules from a public repository.
PROTIP: Greenkeeper identifies changes in dependencies in npm packages on GitHub.
Java has Maven, referencing http://search.maven.org/#browse, with Apache Ivy (used with Ant) as an alternative.
Ruby has RubyGems, with the gem command managing Ruby Gems.
Python has pip and https://pypi.python.org/pypi
3. Source repositories
When Microsoft and Google both abandoned their own repositories and moved their open-source code to GitHub, it became pretty clear that GitHub outright “owns” public source repositories.
More precisely, the Git repository format is the de facto international standard, also used by competitors of GitHub (GitLab, etc.), rather than previous tools such as Subversion.
The “Configuration as code” movement within DevOps is pressuring organizations to store server configuration code using the same Git mechanism as app developers.
All this is so that changes to human-readable text are identified by who made specific changes, when each change occurred, and why (as described in commit messages).
Use of a versioned repository enables the team to back out a release and roll back to a previous release if something goes horribly wrong.
Also, when a developer retrieves team-level code from the team repository (using a Git fetch or pull command), Git automatically identifies whether several people have worked on the same lines in the same file.
When developers commit their work to a team repository in small increments of working code, the team’s actual progress can be measured in those increments. This may save time reporting progress. Those who want to know can simply view a dashboard.
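As a sketch of how those small increments become visible, attributable history (assuming git is installed; the file names and identity below are hypothetical):

```shell
# Each small commit of working code becomes a measurable increment in the log
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "dev@example.com"   # hypothetical identity for the demo
git config user.name "Demo Dev"
echo "feature one" > app.txt
git add app.txt
git commit -qm "feat: first working increment"
echo "feature two" >> app.txt
git commit -qam "feat: second working increment"
git log --oneline    # shows who committed what, and in what order
```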
4. Binary repositories
For files meant to be read by computers, such as those created by compilation and builds (Windows .dll files) and graphics files, there is Artifactory and Nexus.
They expose security-vetted editions of packages that are publicly available through installers like Homebrew, but hosted internally.
5. Build scripts
When an individual developer works alone on a Java package, a file written to control a build tool such as Make or Ant saves a few seconds versus typing commands.
But more important than saving time: because modern apps are tested against specific versions of many dependencies, a Maven pom.xml file specifies all of them, for consistency.
pom.xml configuration files are stored along with code in a Git repository.
To invoke Maven to process the pom.xml file, there is one command:
mvn install
The default install action downloads and expands the correct version of all dependencies noted in the file.
6. Scheduled jobs on deploy server
When developers work together, a separate server running build scripts avoids disturbing individual workers' laptops.
Running build scripts nightly ensures that not much time goes by before catching issues in builds. The simplest integration test is whether Java or C# code can all compile and be assembled into a final executable.
Such runs can be simply initiated on a nightly schedule by a cron utility program that comes with Linux operating systems.
Scheduled smoke tests are also useful to determine whether applications can still be used, such as signing in and out. Run early in the morning, these provide “early warning” so people can take remedial action before others begin work that depends on the app.
Scripts running every few minutes provide constant activity that ensures processes remain in the server's memory, avoiding delays for the first person who signs in early in the day.
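All three schedules can be expressed as crontab entries (the script paths here are hypothetical):

```shell
# crontab fields: minute hour day-of-month month day-of-week command
0 2 * * *    /opt/ci/nightly-build.sh >> /var/log/nightly-build.log 2>&1   # nightly build at 2:00 AM
30 5 * * *   /opt/ci/smoke-test.sh    >> /var/log/smoke-test.log    2>&1   # early-morning smoke test
*/5 * * * *  /opt/ci/keep-alive.sh    > /dev/null                   2>&1   # keep processes warm in memory
```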
7. Deploy Server Configuration Management
On the server, specific versions of various packages (from the JVM to application servers) need to be installed in a specific order.
That is the job of configuration management scripts processed by tools such as Ansible, Chef, Puppet, Salt.
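A minimal sketch of such a script, written as an Ansible playbook (the host group and package names below are hypothetical):

```yaml
# Install a JVM, then an application server that depends on it, in order
- hosts: deploy_servers
  become: yes
  tasks:
    - name: Install OpenJDK 11
      apt:
        name: openjdk-11-jdk
        state: present
    - name: Install Tomcat (after the JVM it requires)
      apt:
        name: tomcat9
        state: present
```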
8. Automated jobs
The introduction of a continuous integration server (such as Jenkins, Bamboo, Travis-CI, Circle-CI, etc.) enables event-driven rather than schedule-driven kick-off of build automation.
A typical event is when new code is committed to the version control system. The mechanism is called a “web hook” in GitHub/GitLab.
Jenkins build software provides more features to automate jobs than simple cron processes.
In addition to doing builds, Jenkins can kick off test automation if a build is successful. Such a mechanism is called “post-build actions” within Jenkins.
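In modern Jenkins “Pipeline” jobs, the same build-then-test sequence can be sketched in a Jenkinsfile (the commands and email address below are hypothetical):

```groovy
pipeline {
  agent any
  stages {
    stage('Build') {
      steps { sh 'mvn -B install' }   // typically triggered by a webhook on each commit
    }
    stage('Test') {                   // runs only if the Build stage succeeded
      steps { sh 'mvn -B verify' }
    }
  }
  post {
    failure {
      mail to: 'team@example.com',
           subject: 'Build failed',
           body: 'See the console log for details.'
    }
  }
}
```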
PROTIP: Jenkins can be configured to stand up entire environments downstream immediately when a build is successful. This is an improvement over waiting for development work to be done before provisioning system-testing servers. It is made possible by the availability of cheaper servers and automation.
Jenkins has dozens of add-ons to perform additional work.
9. Secrets Vault
For convenience, developers leave passwords in source code to avoid typing them all the time. Secrets are needed to call databases and APIs.
But passwords should not be stored openly in public repositories. Hackers use “dorking” scripts to scan code repositories for passwords.
HashiCorp created Vault to provide secure access to secrets in a unified way. It takes care of secure storage with detailed audit logs, as well as key rolling (leasing and renewal) and revocation.
The “Jenkins Vault” plug-in enables Jenkins jobs to obtain secrets.
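With the Vault CLI, storing and retrieving a secret looks like this (assumes a running, unsealed Vault server and a valid token; the secret path and key name are hypothetical):

```shell
vault kv put secret/myapp/db password='s3cr3t'
vault kv get -field=password secret/myapp/db
```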
10. Unit tests
Developers who favor the “TDD” (Test-Driven Development) approach begin by writing tests to check whether code returns the expected output.
The first lines coded in a module may be error-checking code that returns negative responses.
Initially, the test fails because the code has not yet been written.
But when working code is added that returns positive results, that code can be counted as “done”.
This approach supports a tenet of Agile programming: measuring completion by working software rather than just the number of lines of code written.
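The test-first rhythm can be sketched even in shell (the function and values below are hypothetical):

```shell
# The test below was written first; it fails until the function exists.
add() { echo $(( $1 + $2 )); }   # the "working code", added second to make the test pass

result=$(add 2 3)
if [ "$result" -eq 5 ]; then
  echo "PASS: add 2 3 -> $result"
else
  echo "FAIL: expected 5, got $result"
fi
```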
In the Java world, JUnit is the de facto standard, although TestNG is also popular. For C# applications, the NUnit testing framework provides functionality similar to JUnit's, as does Test::Unit for Ruby. For C/C++, there is CppUnit. PHP developers can use PHPUnit.
Because “xUnit” tools all create result reports in XML format, they can be displayed using the same “xUnit Plugin” for Jenkins.
Jenkins makes a distinction between failed builds and unstable builds. Failed builds are indicated by a red ball. Unstable builds, not considered of sufficient quality, are indicated by a yellow ball.
11. Code quality checks
Developers are increasingly adding use of code scanners such as SonarQube to automatically ensure that all code complies with rules. Software named CAST analyzes code for the system as a whole.
Note that even with software scans, there is still value in a team talking about each other's work.
However, automation enables the team to focus on more substantive topics because each individual can deal on their own with repetitive issues that can be identified automatically.
12. Team-level system tests
Even when code does not directly conflict with another developer’s work, running programs may conflict with others who share memory or packages or other resources.
Running individual unit tests along with other unit tests may reveal conflicts.
For example, one integration conflict between components occurs when an element is removed in the receiver code but not in the caller code of an API call.
13. Code Coverage
Continuous integration depends on a high percentage of code coverage to ensure that “everything” works prior to deployment.
There are apps such as SonarQube which identify whether test code covers (causes to execute) specific lines of code.
14. Alerts
To notify people when builds fail, Jenkins can be configured to send out emails which may contain logs and reports from builds and test jobs.
Jenkins also has a wide range of ready-built integrations which use more proactive channels such as Instant Messaging.
Such mechanisms are often used to run “smoke” tests to detect whether a system is still working so that people can be alerted to troubleshoot before too much time goes by.
But an organization cannot rely on alerts alone.
15. Predictions based on previous metric values
Performance or “speed” tests are often included to measure the response time for a single simulated user. Such metrics for various transactions in the program are maintained over time so that developers are alerted if response times suddenly become slower than historical averages.
This provides an early warning system for possible issues before additional labor is spent on an architecture that needs re-design.
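A minimal sketch of such a check, comparing the latest response time to the historical average (the numbers and the 20% threshold are hypothetical):

```shell
# Alert when the latest response time exceeds the historical average by more than 20%
history="120 130 125 128 122"   # past response times in ms
latest=210                      # latest measured response time in ms
sum=0; n=0
for t in $history; do sum=$((sum + t)); n=$((n + 1)); done
avg=$((sum / n))                # 625 / 5 = 125 ms
limit=$((avg * 120 / 100))      # 150 ms threshold
if [ "$latest" -gt "$limit" ]; then
  echo "ALERT: ${latest}ms exceeds ${limit}ms (historical avg ${avg}ms)"
fi
```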
Tools such as PagerDuty are used to specify escalation points of contact for each application.
16. Metrics dashboard and retention
By default, Jenkins maintains a history of builds.
Annotations, such as the origin of test failures and how to fix them, can be added to a job using the Edit Description link in the top right-hand corner of the Jenkins screen.
However, Jenkins does not aggregate a group of jobs for display in a single dashboard.
A computer monitor dedicated to such constant display (like at an airport) is often called a “build radiator”.
Some dashboards consolidate measures of efficiency and effectiveness into a single metric.
17. System-level integration tests
While individual unit tests typically use static (unchanging) data, tests of whether individual components “integrate” with each other tend to use more dynamic data which change during a test run.
Tests of APIs (Application Programming Interfaces) between “microservices” are conducted at this level.
This step requires the team to identify
18. User-level acceptance tests
Automation of tests that focus on what end-users do is called Behavior-Driven Development (BDD) and Acceptance-Test-Driven Development (ATDD). Examples of tools for this are Cucumber, FitNesse, JBehave, RSpec, easyb, etc.
The end-user focus also enables automation that proves which user features have been implemented, and which remain to be done.
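For example, a Cucumber scenario written in Gherkin syntax describes end-user behavior in plain language (the feature and steps below are hypothetical):

```gherkin
Feature: Sign in
  Scenario: Registered user signs in successfully
    Given a registered user "alice"
    When she signs in with a valid password
    Then she sees her dashboard
```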
Stress and load testing are done at this level.
19. Server monitoring
Measurement of server status and resource usage is necessary especially during stress and load tests.
Software in this category include Nagios, Splunk, AppDynamics, New Relic, etc.
Measurement history should be analyzed over time to determine trends which may impact capacity.
20. Regression testing
Regression testing ensures that everything still works after changes are made.
Investments in automation are returned in savings from wasted labor repeating tests during each cycle of change.
Individual xUnit tests can be temporarily deactivated when necessary.
21. Continuous Deployment to production
The use of build and test automation enables a team to have confidence that potential defects can be caught before appearing in production use by end-users (customers).
Trust in automation is what enables continuous deployment directly into production.
But to those accustomed to a traditional waterfall approach, quick movement into production is one of many steps where action can stagnate.
Comprehensive tests enable a “fail fast, recover fast” capability that removes the fear from the experimentation necessary to increase innovation.
The availability of a vast number of servers instantly from a public cloud vendor means that entire stacks of servers can be stood up, so that several stacks can handle production loads at the same time.
This makes it possible to quickly switch from one version to another, even after deployment. Many call this the “blue/green” strategy.
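One common low-tech sketch of a blue/green switch repoints a symlink that the web server serves from (the paths below are hypothetical):

```shell
# Two complete releases exist side by side; traffic follows the "current" symlink
set -e
tmp=$(mktemp -d)
mkdir "$tmp/blue" "$tmp/green"
ln -s   "$tmp/blue"  "$tmp/current"   # production traffic -> blue
ln -sfn "$tmp/green" "$tmp/current"   # switch to green in one step
readlink "$tmp/current"               # now points at green; switching back is just as fast
```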
PROTIP: Change acceleration
The automation described above accelerates what people can do manually. This is measured by how quickly changes go from concept through various environments and finally into production.
This enables business agility – the ability to respond more quickly to changes in market forces and customer needs.
Organizations that move quicker than their competitors have a significant, fundamental competitive advantage.
So investments need to be driven from the corporate top level down and fostered from the bottom up.
This means not just posters and team-building retreats, but professionally designed change-management programs, technical hackathons, and training for directors and program managers as well as technicians.
“You can't buy DevOps. You have to sell it.” To everyone.
So working out the psychological and political issues is, in my experience, far more important to actual adoption success than technical and financial factors.
This is one of a series on Git and GitHub:
- Why Git? (file-based backups vs Git clone)
- Git HEAD (Commitish references)
- Git-client based workflows
- Git whoops (correct mistakes)
- Git rebase
- Git interactive merge (imerge)
- Git custom commands
- Git utilities
- TFS vs GitHub
- GitHub REST API
- GitHub GraphQL API
- GitHub PowerShell API Programming
- GitHub GraphQL PowerShell Module