
Wilson Mar


Impose load remotely from Docker instances in the AWS cloud


Overview

The diagram here describes progress toward distributing runs of JMeter within EC2 and/or Docker, and scaling those instances to increase load on app servers. Each step is a deliverable within the sequence of MVP (Minimum Viable Product) stages.

NOTE: Content here is my personal opinion, and not intended to represent any employer (past or present). “PROTIP:” highlights information I haven’t seen elsewhere on the internet: hard-won, little-known but significant facts based on my personal research and experience.

Setup Scenarios

The components necessary for scripting and running performance/capacity emulation tests are:

a. The application under test. I’ve used Dave Haeffner’s the-internet because it is intended as a set of JavaScript challenges for writing user-emulation scripts. There are other sample apps.

b. App hosting environment. Dave Haeffner has graciously created an instance on Heroku for single-user runs during scripting. But for load/capacity tests, we need to create a stand-alone app instance within a cloud. “the-internet” has a Docker image to run multiple users.

c. Emulator program (such as JMeter) to control one or many emulated client instances running emulation scripts at the same time.

d. The emulator hosting environment needs to be separate from the app environment under load. SaaS services (Blazemeter, StormRunner, Flood.io, etc.) can provide this environment. With more work, a Docker image from DockerHub can be pulled locally or into a public-cloud instance created using AWS CDK, Terraform, etc.

e. CI/CD workflow engine builds the app under test and tests it for security, functionality, capacity, etc. A Docker image of the free/open-source Jenkins can be used locally or in a public cloud. SaaS services (Harness.io, CircleCI, GitHub Actions, etc.) can provide this as well.

f. Monitoring (metrics, diagnostics, logging) of the environment running the app: metrics to identify trends, diagnostics to pinpoint bottlenecks, and logs to identify root causes.

| Scenario | a. App Under Test | b. App host env | c. Emulator pgm. | d. Emulator hosting | e. CI/CD | f. Monitoring |
|----------|-------------------|-----------------|------------------|---------------------|----------|---------------|
| A. Blazemeter - single trans. | (the-internet) | Dave's Heroku | (JMeter) | Blazemeter | | |
| B. Blazemeter - multi-trans. | (the-internet in Docker) | Your AWS ECS | (JMeter) | | | |
| C. Local (offline) | Apache web | Docker | JMeter | local Docker | N/A (Jenkins) | N/A |
| D. AWS with CI/CD | custom (Apigee) | Custom (AWS ECS/K8s) | JMeter | Your AWS ECS | Cloudbees, CircleCI, etc. | AWS Monitoring |

Scenarios

A. If your app under test can be reached from the public internet (such as “the-internet” running on Dave’s own Heroku instance), you don’t need to install an emulator (such as JMeter) on your laptop if you use Blazemeter SaaS, which provides a quick and easy way to begin. But please don’t run more than one user at a time.

Blazemeter runs JMeter scripts you upload from your laptop.

B. To run multiple users at a time, pull both “the-internet” app and emulator (JMeter) images from DockerHub and run them in your own cloud instance (within AWS ECS, Azure, GCP, DigitalOcean, etc.). AWS ECS is usually enough (without Kubernetes) because the number of emulator (JMeter) instances is usually fixed before a test run (and adjusted afterward).

C. If you want to create emulator scripts offline on your laptop (one with enough memory), run several Docker images locally. You may not have enough power to also run a conventional CI/CD server (such as Jenkins) or much monitoring, hence the “N/A”.
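For example, a minimal local pairing of the two containers might look like this (the gprestes/the-internet image and its port are from that project's DockerHub page; the "jmeter" image name is a placeholder for whichever JMeter image you build or pull):

```shell
# App under test, published on localhost:9292
docker run -d --name the-internet -p 9292:9292 gprestes/the-internet

# Emulator container running a script against it (placeholder image name);
# the current directory is mounted to exchange the JMX script and results
docker run --rm -v "$PWD":/work jmeter -n -t /work/test.jmx -l /work/results.jtl
```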

D. There are other scenarios, but the most common scenario is standing up two cloud instances: an AWS ECS/K8S instance to run the app under test (such as Apigee) and another to run JMeter for performance/capacity testing. The flowchart below describes the intricacies that go with such a setup:

AWS with CI/CD (Scenario D)


To keep it simple, let’s say our on-premises system under test consists of (1) a server responding to API requests behind a governance proxy such as Apigee. The API front end needs to be set up first because it authenticates requests based on pre-assigned tokens provided to those who call the service.

A (2) Monitoring agent on each server (such as Dynatrace, Telegraf, SignalFx, etc.) collects various metrics for display on the vendor’s Dashboard.

Now we can begin to construct (3) JMeter scripts that impose artificial loads. From a laptop, we can only impose a limited load. But that is OK because we use laptops just to craft scripts. Once viable, the scripts, along with associated files, are pushed into a (4) private version-control repository such as AWS CodeCommit.

Within security-conscious enterprises, instead of downloading installer packages from the internet, it is safer to obtain installers that have been vetted by Security specialists before being made available from a (5) private repository such as Artifactory or Docker Trusted Registry (DTR). A lot of work is needed to vet the many dependencies for those who prefer to build machines from the ground up using Configuration as Code (CaC), a practice that enables them to respond to issues quickly because they can change anything within the tech stack.

To make use of the Amazon cloud, on the laptop we install the (7) AWS CLI and associated tools to craft (8) CloudFormation files that instantiate services such as EC2 with Docker to run server programs within the AWS Cloud. Within AWS, we (9) instantiate images containing JMeter using those common scripts in the code repository.
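As a sketch, deploying such a CloudFormation template from the laptop might look like this (the template file and stack names are hypothetical):

```shell
# Validate the template, then create/update the stack of EC2/Docker hosts
aws cloudformation validate-template --template-body file://jmeter-ec2.yaml
aws cloudformation deploy \
  --template-file jmeter-ec2.yaml \
  --stack-name jmeter-load-gen \
  --capabilities CAPABILITY_IAM
```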

Before we run, we should (10) lint and audit the containers using various tools.

When we need to add more JMeter instances to impose a heavier load, we can use a (11) JMeter master to coordinate the JMeter slave nodes. The master starts a fixed number of nodes to test (12) app auto-scaling mechanisms.

Next, we’ll look at (13) configuration settings for the cloud, such as AWS placement groups to obtain low latency between servers within the same Availability Zone.

When configuration settings are under version control, changes can (14) trigger (15) CI/CD to automatically initiate test runs. If the analytics system has enough history, it can (16) recognize trends and, if anomalies are identified, issue (17) alerts while the changes are still fresh in the mind of the person who made the change.

Because network traffic between on-premises servers and load generators in the cloud is subject to significant variability, it would be ideal to have a (18) load generator near each machine under test. But going through the corporate firewall can be problematic.

It might be easier to make use of a (19) web-based SaaS service such as Blazemeter or Flood.io. With them, we just upload a script and they handle the rest, such as configuring enough machines.


Steps

Below are more details about each deliverable:

cloud-jmeter-flow-1256x741.png

  1. Set up the application under test (on-prem), with API tokens and/or GUI User ID/Password.

    For the purpose of this exercise, we run a simple “hello world” program in the background. A real production configuration would have a load-balanced API Gateway service in front of machines responding to API requests.

    PROTIP: We’ll need several types of tokens. We generally use tokens with a lot of credits for stress or soak testing. We also need one with no credits to test rejection mechanisms. And an automated way is needed to reset tokens after each test.

  2. Install monitoring (Dynatrace, SignalFx, Splunk, etc.) with a dashboard showing analytics visualization from data collected.

    (The InfluxDB time-series database and Grafana analytics-visualization tools are popular.) InfluxDB has no external dependencies and provides a SQL-like language with built-in time-centric functions.

    Each time-series dataset contains several key-value pairs, consisting of the fieldset and a timestamp.
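Concretely, a data point in InfluxDB's line protocol pairs a measurement and tags with a field set and a timestamp. The measurement and tag names below are hypothetical examples of what a JMeter backend listener might write:

```text
# measurement,tag_set field_set timestamp(ns)
jmeter,transaction=login,host=generator-1 avg=187.5,count=42 1546300800000000000
```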

    The monitoring software can be adapted to collect JMeter statistics.

  3. On a laptop, install and run a single instance of JMeter.

    The prerequisite for JMeter is a Java Virtual Machine (JVM).

    The assumption is that JMeter has been installed. There are different installation processes for Windows vs. MacOS vs. Linux machines.

    See TODO: Install JMeter shell script.
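A minimal install sketch for MacOS/Linux, assuming a JVM is already present (the version number is an example; archive.apache.org keeps historical JMeter binaries):

```shell
JMETER_VERSION=5.0
curl -LO "https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-${JMETER_VERSION}.tgz"
tar -xzf "apache-jmeter-${JMETER_VERSION}.tgz"
"apache-jmeter-${JMETER_VERSION}/bin/jmeter" --version
```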

  4. Version Control within the cloud (AWS Code Commit)

  5. Identify installers, vet them, and store versions in Artifactory.

  6. Install Docker within EC2.

  7. Install AWS CLI and dependencies Python, jq, cf-lint, etc.

  8. Code AWS CloudFormation (CF) templates to create, within the AWS cloud, an EC2/Docker instance running JMeter.

    One of the advantages of Docker is that, once encapsulated within a Docker container, an app can be run unmodified on various operating systems (Windows, MacOS, Linux, etc.).

    Details of selecting or building an image, then creating a Dockerfile to use that image are here.

    Each JMeter host (server) process uses two ports: one to listen for instructions from the master and another to write responses back to the master. The server image exposes two ports for this purpose.

    Start n instances of jmeter-server, each bound to two well-known ports on the host.

    Determine IP addresses from the container ID of each server instance.
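That lookup can be done per container (the container name is an example):

```shell
# Prints the container's IP address on the default bridge network
docker inspect -f '{{.NetworkSettings.IPAddress}}' jmeter-server-1
```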

    Start the JMeter client (master). The client image is crafted to receive the location of the remote server instances during invocation and write its logs & test results back to the host.

    When the JMeter client starts up, it connects with every server instance. Monitor the master’s log file on the host for all the action. When the tests complete, simply remove all the Docker containers. This leaves just the logs & test results!

    The Master sends JMX files to slave nodes.

    Configure Master machine with an equitable number of users (for 500 users total on 2 slaves, setup 250 each).

    All systems should have the same version of Java and JMeter.

    All systems should be connected to each other in the same subnet.

  9. Load JMeter script.

  10. Install auditctl from the Center for Internet Security (cisecurity.org) and run the Docker daemon to audit Docker events.

    CIS Docker CE benchmark

    https://github.com/docker/docker-bench-security

    InSpec (http://inspec.io) is an open-source testing framework for infrastructure, with a human- and machine-readable language for specifying compliance, security, and policy requirements. It is implemented in the inspec CLI command running on Debian, Ubuntu, and CentOS. Its DevSec Hardening Framework defines rules in a YAML attribute file.

    The CIS Docker Benchmark Profile at https://github.com/dev-sec/cis-docker-benchmark

    https://www.cisecurity.org/cis-benchmarks/#docker

    Run the Docker daemon to trigger audit events:

    dockerd -v
  11. Configure a Master instance to control JMeter slaves

    When there is more than one JMeter instance, a master instance is needed to send instructions and receive responses.

    1. On each JMeter node console, identify the IP addresses of the slave machines using “ifconfig”, looking for the “inet” entry under “en0”.
    2. Within the master’s bin folder, edit file jmeter.properties.
    3. Find “remote_hosts” and un-comment the line by removing the “#” on the left.
    4. Use commas to separate multiple IP addresses. Save the file.
    5. To enable remote start from the master machine, generate file rmi_keystore.jks by running create-rmi-keystore.sh (or .bat). The “First and last name:” has to be “rmi” (Remote Method Invocation). Supply a password you’ve written down. See https://jmeter.apache.org/usermanual/remote-test.html

    6. Copy the file to the bin folder of all slave nodes. Reference the property “server.rmi.ssl.keystore.file”.

    7. On each slave node, start the JMeter server:
    sh jmeter-server.sh

    On the master in GUI mode, see menu Run, Remote Start to verify each slave’s IP address.
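After those edits, the relevant lines in the master's bin/jmeter.properties look like this (the IP addresses are examples):

```properties
# Slave nodes, comma-separated (un-commented by removing the leading "#"):
remote_hosts=192.168.1.2,192.168.1.3
# Keystore generated by create-rmi-keystore.sh:
server.rmi.ssl.keystore.file=rmi_keystore.jks
```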

    Alternately, to run in non-GUI mode using the “-n” flag:

    sh jmeter.sh -n -t "/$JMETER_PATH" -R 192.168.1.2

    After run, view JMeter’s output results file.

    PROTIP: Several runs are usually necessary to identify the number of virtual users that can be supported on a single machine. Configure the master machine with an equitable number of users (for 100 users total on 2 slaves, set up 50 each). The above is based on https://www.youtube.com/watch?v=Ok8Cqc0wipk

  12. Verify app auto-scaling.

    Driver must:

    • Create the specified number of JMeter server containers
    • Create the JMeter master container
    • Fire off the test
    • Wait for the test to complete
    • Remove all the containers

    It took some scripting foo along with some Docker image revisions. I now have a setup that allows me to:

    driver.sh -s jmxfile.jmx \
    -d data-dir -n 8
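The driver steps listed above can be sketched as a shell script. This is an illustration, not the actual srivaths/jmeter-driver code: the image names "jmeter-server"/"jmeter" and the DRY_RUN flag (which prints commands instead of executing them, defaulting to on here so the flow can be reviewed) are assumptions.

```shell
#!/bin/sh
# Sketch of a driver per the bullet steps above (hypothetical image names).
DRY_RUN="${DRY_RUN:-1}"

run() {  # execute a command, or just print it when dry-running
  if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

drive() {
  jmx="$1"; n="$2"
  servers=""
  i=1
  while [ "$i" -le "$n" ]; do                       # 1. create n server containers
    run docker run -d --name "jmeter-server-$i" jmeter-server
    servers="$servers jmeter-server-$i"
    i=$((i + 1))
  done
  run docker run --name jmeter-master \
      -v "$PWD":/work jmeter -n -t "/work/$jmx"      # 2-3. master fires off the test
  run docker wait jmeter-master                      # 4. wait for completion
  run docker rm -f jmeter-master $servers            # 5. remove all the containers
}

drive jmxfile.jmx 2
```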
  13. Identify change trigger.

  14. Kick off CI/CD job.

  15. Trends.

  16. Alerts.

  17. Create a JMeter instance near front-end (API) server

  18. Bring JMeter script to SaaS cloud performance testing service.

You can find the work referenced in this blog at:

  • https://github.com/srivaths/jmeter-driver
  • https://github.com/srivaths/jmeter-base
  • https://github.com/srivaths/jmeter
  • https://github.com/srivaths/jmeter-server

I gave a lightning talk on this work. The slide deck I used for it is at http://www.slideshare.net/srivaths_sankaran/jmeter-docker-sitting-in-a-tree.

(For example, https://aqueduct.flood.io/ to get through from internal IPs in the cloud through a firewall, exiting as a TLS pipe on port 80/443. Similar to Ngrok. Flood.io filters out )


Dockerfile

A Dockerfile contains all the commands necessary for Docker to assemble an image. It is not a program like Java; it is a Domain-Specific Language (DSL).

  1. The sample Dockerfile assumes these environment variables have been defined prior to execution:

    export JMETER_HOME="/usr/local/bin/jmeter"
    export JMETER_VERSION="5.0"
    export MIRROR_HOST="???"
    export JMETER_DOWNLOAD_URL="???"
    export JMETER_PLUGINS_DOWNLOAD_URL="???"
    export JMETER_PLUGINS_FOLDER="???"
    
  2. At “# 2”, put your name in place of:

    LABEL maintainer="wilsonmar@gmail.com"
  3. At “# 3”, notice that the version at the time of writing is 3.3. In order to update it yourself, you would need to test it and put a new image in DockerHub.

  4. At “# 5” edit the time zone from “Europe/Rome” https://www.zeitverschiebung.net/en/timezone/europe–rome

    https://www.baeldung.com/java-daylight-savings referencing http://www.iana.org/time-zones

  5. Switch to edit another file: /etc/sysconfig/clock and change the UTC line to: “UTC=true”.

  6. Save the file using the keystrokes for the editor you’re using.

    A Docker volume is created to exchange files with the container.

    ls -ltr tmp/

Dockerize

Docker commands are issued from a CLI or a shell script:

docker build -t jmeter <path-to-Dockerfile>

During the build process, many network contents can be fetched, so the time it takes can vary. The last message should read:

Successfully tagged jmeter:latest

The fetched content types can vary, from a simple text file to an archive package (e.g. zip, tar.gz, rpm, deb, etc.). Afterwards, these files are “installed” on the image with specific commands (e.g. copy for a text file, unzip for zip, tar for tar.gz, etc.).

Docker images run within a Docker service running in the background.

docker run -t <image_name> <arguments>

DockerHub

To minimize cost, we want to use a Docker image with the minimum memory requirement.

After a review of alternative images identified from a search of DockerHub, the one with the smallest footprint is based on Alpine Linux, which is small enough to run on a Raspberry Pi.

“Alpine Linux is built around musl libc and busybox. This makes it smaller and more resource efficient than traditional GNU/Linux distributions. A container requires no more than 8 MB and a minimal installation to disk requires around 130 MB of storage.” – https://hub.docker.com/_/alpine/ lists Alpine Docker images.

See https://wiki.alpinelinux.org/wiki/Setting_the_timezone

  1. Verify that (whether by download or by Dockerizing) we now have an image file:

    docker image ls

Docker launch.sh

  1. Open the launch.sh file in a text editor.

    docker volume create <volume-name>

    If not all information is provided, Docker chooses all the volume configuration details for us (e.g. real path on host machine). With the command

    docker volume inspect <volume-name>

    it’s possible to retrieve where the volume is mapped on the test machine.

    If the test machine is on Windows, or you don’t want to create a stand-alone volume, you can specify the volume directly in the container execution command line via arguments.

  2. Execute the containers:

    https://www.blazemeter.com/blog/make-use-of-docker-with-jmeter-learn-how

    Pass JMeter arguments with the “docker run” command (e.g. which JMX script must be executed, script parameters, etc.).

    Then fetch the result files (e.g. the JTL and log files) using a shared folder on the test machine, called a Docker volume, which can be used to save result files after the container execution ends.

    If the container modifies the file system, the changes do not persist after the container finishes. So, to obtain JMeter results, it’s necessary to set up an exchange folder with the ‘volume’ command.

    On the left you can see our test machine that hosts the JMeter containers and the Docker volume. The volume is used to provide a JMX script file to be executed, and to retrieve from the container the JTL result file and the LOG file on execution.
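A minimal sketch of such a run, assuming an image named "jmeter" whose entrypoint invokes jmeter: the current directory serves as the exchange volume, supplying the JMX script and receiving the JTL and log files.

```shell
docker run --rm -v "$PWD":/work jmeter \
  -n -t /work/test.jmx -l /work/results.jtl -j /work/jmeter.log
```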

Application under test

In this example, the container starts and as a first action it executes a JMeter application with arguments passed with the “docker run” command. When JMeter completes its execution, the container stops itself, leaving the JMeter result files in the Docker volume.

With the script build.sh, the Docker image can be built from the Dockerfile, but this is not strictly necessary, as you may use your own docker build command line.

Build Options

Build arguments (see build.sh) with default values if not passed to build:

JMETER_VERSION - JMeter version, default 3.3
IMAGE_TIMEZONE - timezone of Docker image, default "Europe/Amsterdam"

NB: the IMAGE_TIMEZONE setting is not working yet.

Running

The Docker image will accept the same parameters as jmeter itself, assuming you run JMeter non-GUI with -n.

There is a shorthand run.sh command. See test.sh for an example of how to call run.sh.

User Defined Variables

This is a standard facility of JMeter: settings in a JMX test script may be defined symbolically and substituted at runtime via the command line. These are called JMeter User Defined Variables, or UDVs.

See test.sh and the trivial test plan for an example of UDVs passed to the Docker image via run.sh.
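For example, a UDV can take its value from a JMeter property set on the command line with -J and read inside the JMX via the __P function (the property name and URL are examples):

```shell
# Set property "base_url" for this run:
jmeter -n -t plan.jmx -Jbase_url=https://the-internet.herokuapp.com
# Inside the JMX, the User Defined Variable falls back to a default:
#   BASE_URL = ${__P(base_url,http://localhost:8080)}
```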

See also: http://blog.novatec-gmbh.de/how-to-pass-command-line-properties-to-a-jmeter-testplan/

Specifics

The Docker image will install (via Alpine apk) several required packages, most specifically the OpenJDK Java JRE. JMeter is installed by simply downloading/unpacking a .tgz archive from http://mirror.serversupportforum.de/apache/jmeter/binaries within the Docker image.

A generic entrypoint.sh is copied into the Docker image and will be the script that is run when the Docker container is run. The entrypoint.sh simply calls jmeter, passing all arguments provided to the Docker container; see the run.sh script:

sudo docker run --name ${NAME} -i -v ${WORK_DIR}:${WORK_DIR} -w ${WORK_DIR} ${IMAGE} $@


Run Docker with monitoring and with auditing on

  1. Install monitoring (Dynatrace)

  2. Run scanner CIS benchmark:

    To remove “Ensure auditing is configured” messages,

  3. Install on Ubuntu:

    sudo apt install -y auditd
  4. Confirm whether auditd is installed (using Linux command):

    command -v auditd
  5. Using a text editor, prevent an error by editing file tests/1_host_configuration.sh so check_1() contains not “docker” but:

    file="/usr/bin/dockerd"
  6. Obtain Process ID:

    pidof auditd
  7. Get the report:

    sudo aureport
  8. Install auditctl to obtain Docker audit events.

  9. Run Docker daemon to trigger:

    dockerd -v
  10. Get event id number from “/usr/bin/dockerd 1000 422”:

    sudo aureport -k
  11. Obtain report by searching the audit log:

    sudo ausearch --event 422 | sudo aureport -f -i
  12. Create a file for each watched rule:

    for i in "${files[@]}"; do sudo auditctl -w "$i" -k docker; done
  13. Make sure rules have been applied to the framework:

    sudo auditctl -l
    • /usr/bin/dockerd
    • /var/lib/docker/
    • /etc/docker/
    • /lib/systemd/system/docker.service
    • /lib/systemd/system/docker.socket
    • /etc/default/docker
    • /etc/docker/daemon.json
    • /usr/bin/docker-containerd
    • /usr/bin/docker-runc

  14. Make the rules permanent:

    sudo sh -c "auditctl -l >> /etc/audit/audit.rules"

Install dashboard for analytics visualization

The InfluxDB time-series database and Grafana analytics visualization tools are popular.

### InfluxDB

Here we adopt InfluxDB to collect JMeter statistics.

Each InfluxDB dataset contains key-value pairs consisting of the fieldset and a timestamp.

docker run --rm \
      --name influxdb \
      -dit \
      --net $TIME_SERIES_NET \
      -e INFLUXDB_DB=db0 \
      -e INFLUXDB_ADMIN_ENABLED=true \
      -e INFLUXDB_ADMIN_USER=admin \
      -e INFLUXDB_ADMIN_PASSWORD=passw0rd \
      -e INFLUXDB_USER=grafana \
      -e INFLUXDB_USER_PASSWORD=dbpassw0rd \
      -v $INFLUXDB_VOLUME:/var/lib/influxdb \
      influxdb
   

--rm removes the container after the run concludes, to avoid container information being preserved across restarts.

--name is the running container name, also used as a domain name in the Docker network.

-dit runs the container in the background (-d) with an interactive terminal (-it), ready to be used like a remote ssh command line.

--net assigns a working virtualized network handled by Docker.

-e passes an environment variable to the newly created container. In this case we configured:

    INFLUXDB_DB - local database name
    INFLUXDB_ADMIN_ENABLED, INFLUXDB_ADMIN_USER and INFLUXDB_ADMIN_PASSWORD configures availability of the admin profile
    INFLUXDB_USER and INFLUXDB_USER_PASSWORD - configure the standard user profile used by Grafana.
   

-v assigns a logical volume on the hosting machine that persists across container restarts. Use of this volume limits disk usage to only necessary data.

InfluxDB has no external dependencies and provides a SQL-like language with built in time-centric functions.

### Grafana

Grafana is not directly connected to JMeter, but can be added to our process via Docker.

  1. To download the 241MB image from DockerHub:

    docker run -d -p 3000:3000 grafana/grafana
  2. Edit conf/grafana.ini

    See http://docs.grafana.org/installation/docker/ and http://docs.grafana.org/installation/configuration/#http-port

    Semicolons (the ; char) are the standard way to comment out lines in a .ini file.
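For example, to change the port Grafana listens on, un-comment and edit the http_port line in the [server] section (the value 8080 is an example):

```ini
[server]
;http_port = 3000
http_port = 8080
```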

  3. Define a profile for AWS in GF_AWS_PROFILES (e.g. GF_AWS_PROFILES=default another).

  4. Edit this sample run script to replace the default server name, secret, and AWS credentials for CloudWatch support:

    # Create a persistent volume for data in /var/lib/grafana (database and plugins):
    docker volume create grafana-storage
     
    docker run \
      -d \
      -p 3000:3000 \
      --name=grafana \
      -e "GF_SERVER_ROOT_URL=http://grafana.server.name" \
      -e "GF_SECURITY_ADMIN_PASSWORD=secret" \
      -e "GF_AWS_PROFILES=default" \
      -e "GF_AWS_default_ACCESS_KEY_ID=GF_AWS_${profile}_ACCESS_KEY_ID" \
      -e "GF_AWS_default_SECRET_ACCESS_KEY=GF_AWS_${profile}_SECRET_ACCESS_KEY" \
      -e "GF_AWS_default_REGION=GF_AWS_${profile}_REGION" \
      -v grafana-storage:/var/lib/grafana \
      grafana/grafana
    
  5. Open your browser to view Grafana at URL http://localhost:3000/.

    3000 is the default http port that Grafana listens to if you haven’t configured a different port.

    http://docs.grafana.org/guides/getting_started/

    An alternative to Grafana is a web app to explore JMeter results using Angular.js 1.0 & d3.js at http://smarigowda.github.io/ngd3jmeter/

Docker

Dockerfiles from:

  • https://hub.docker.com/r/justb4/jmeter is an Alpine Linux Docker image for Apache JMeter. This Docker image can be run as the jmeter command. It’s actively maintained.

    The Dockerfile in the GitHub repo needs to be edited because JMeter is installed by downloading/unpacking a .tgz archive into the Docker image from the public http://mirror.serversupportforum.de/apache/jmeter/binaries. Enterprises would want to pull the image from an internal repository (such as Artifactory) after it has been vetted by Corporate Security.

    A note in the GitHub repo says that the Dockerfile is adapted from:

    • https://github.com/hauptmedia/docker-jmeter and
    • https://github.com/hhcordero/docker-jmeter-server

UI, Load, and Performance Testing Your Websites on AWS [42:25] WEB306 at AWS re:Invent 2014 | Nov 18, 2014

From Srivaths Sankaran:

  • JMeter Cloud Using Docker Apr 9, 2015 [7:47] references https://srivaths.blogspot.co.uk/2014/08/distributed-jmeter-testing-using-docker.html

  • git clone https://github.com/smarigowda/jmeter-driver.git

https://app.pluralsight.com/player?course=securing-docker-platform Securing the Docker Platform

https://www.youtube.com/watch?v=R_-YivV_mKo jmeter-docker poc Jul 10, 2018 by Purshottam Tyagi at https://github.com/tyagipurshottam/jemter [sic]

Performance Engineering

https://www.guru99.com/performance-testing.html

Santosh Arakere Marigowda

  • Created an image to pull for use inside a Docker container:
docker pull santosharakerre/jmeter-base

Code for the above is from https://github.com/santosharakerre/jmeter-base

  • https://github.com/smarigowda/jmeter-driver

  • https://github.com/santosharakerre/

  • https://www.youtube.com/watch?v=ByxsqYN5tOw JMeter Cloud Using Docker Apr 9, 2015 [7:47]

  • https://www.youtube.com/watch?v=snq8OId8CGg JMeter 3.2 + InfluxDB + Grafana + Slack Using Docker Containers [12:53] May 8, 2017

JMeter | Remote Testing | Master Slave | Distributed Testing Jul 15, 2018 [17:54] by Raghav Pal who has a whole video “Automation Step by Step” playlist on JMeter

Pluralsight does not have a JMeter course as of March 1, 2019.




Emulator programs

https://www.redline13.com/blog/open-architecture-with-aws/ RedLine13 SaaS runs on AWS

https://loadninja.com/ LoadNinja is a licensed cloud-based load-testing tool that empowers teams to record & instantly play back comprehensive load tests, without complex dynamic correlation, and run these load tests in real browsers at scale.

NOTE: Microsoft exited software testing in 2019 by retiring Coded UI tests and Visual Studio load testing.

JMeter

$25+ Master Apache JMeter From load testing to DevOps 2020-04-28 by Antonio Gomes Rodrigues, Philippe Mouawad, and Milamber, with preface by Alexander Podelko

NOTE: This is based on the structure of folders at

  • https://github.com/jmeterbyexample/jmeter-test-scripts
  • https://github.com/nighteblis/JmeterBook is a JMeter tutorial in Chinese and English at https://translate.google.com/translate?hl=&sl=auto&tl=en&u=https%3A%2F%2Fwww.hissummer.com%2F
  • https://github.com/Sunbird-Ed/sunbird-perf-tests
  • https://github.com/cf-identity/jmeter pulls in Apache JMeter so that new versions will not break runs by my scripts here. It also contains png files of metrics generated during the last run.
  • https://github.com/ambertests/JMeterExamples has a PropExample.jmx - Script which takes properties from the command-line

Others:

  • https://github.com/apolloclark/jmeter
  • https://github.com/mozilla/jmeter-scripts

Blazemeter

  • https://www.blazemeter.com/blog/make-use-of-docker-with-jmeter-learn-how
  • https://www.blazemeter.com/blog/jmeter-distributed-testing-with-docker

Other scenarios

https://aws.amazon.com/blogs/devops/setting-up-a-ci-cd-pipeline-by-integrating-jenkins-with-aws-codebuild-and-aws-codedeploy/ Setting up a CI/CD pipeline by integrating Jenkins with AWS CodeBuild and AWS CodeDeploy by Noha Ghazal 29 OCT 2019


AWS

https://www.programmersought.com/article/18926104968/ How to use AWS EC2+Docker+JMeter to build a distributed load testing infrastructure

NaveenKumar Namachivayam of QAInsights - STAR: github.com/awslabs/distributed-load-testing-on-aws used by BLOG, Implementation Guide and VIDEO: Distributed Load Testing on AWS - Run JMeter Tests part 1, part 2

Jenkins

https://www.youtube.com/watch?v=E02iab7vZyg How To Use JMeter In Jenkins? Jenkins Report Generation | Performance Testing Tutorial | Edureka

https://performanceengineeringsite.wordpress.com/2017/11/01/automating-jmeter-run-using-jenkins-ci-cd/ Automating Jmeter run using Jenkins CI/CD: Load, APM, Log management tools ,Docker & Kubernetes

https://www.jenkins.io/doc/book/using/using-jmeter-with-jenkins/ Using JMeter with Jenkins (performance plugin, )

https://www.cloudbees.com/blog/how-integrate-jmeter-jenkins How to integrate JMeter into Jenkins by Dmitri Tikhansi from BlazeMeter.

https://www.vinsguru.com/best-practices-jmeter-performance-testing-in-continuous-delivery-pipeline/ Best Practices – JMeter – Adding Performance Testing in CI / CD Pipeline 2 Comments / Articles, AWS / Cloud, Best Practices, CI / CD / DevOps, Distributed Load Test, Framework, Jenkins, JMeter / By vIns / January 30, 2017


More on DevSecOps

This is one of a series on DevOps:

  1. DevOps_2.0
  2. ci-cd (Continuous Integration and Continuous Delivery)
  3. User Stories for DevOps
  4. Enterprise Software

  5. Git and GitHub vs File Archival
  6. Git Commands and Statuses
  7. Git Commit, Tag, Push
  8. Git Utilities
  9. Data Security GitHub
  10. GitHub API
  11. TFS vs. GitHub

  12. Choices for DevOps Technologies
  13. Pulumi Infrastructure as Code (IaC)
  14. Java DevOps Workflow
  15. Okta for SSO & MFA

  16. AWS DevOps (CodeCommit, CodePipeline, CodeDeploy)
  17. AWS server deployment options
  18. AWS Load Balancers

  19. Cloud services comparisons (across vendors)
  20. Cloud regions (across vendors)
  21. AWS Virtual Private Cloud

  22. Azure Cloud Onramp (Subscriptions, Portal GUI, CLI)
  23. Azure Certifications
  24. Azure Cloud

  25. Azure Cloud Powershell
  26. Bash Windows using Microsoft’s WSL (Windows Subsystem for Linux)
  27. Azure KSQL (Kusto Query Language) for Azure Monitor, etc.

  28. Azure Networking
  29. Azure Storage
  30. Azure Compute
  31. Azure Monitoring

  32. Digital Ocean
  33. Cloud Foundry

  34. Packer automation to build Vagrant images
  35. Terraform multi-cloud provisioning automation
  36. Hashicorp Vault and Consul to generate and hold secrets

  37. Powershell Ecosystem
  38. Powershell on MacOS
  39. Powershell Desired System Configuration

  40. Jenkins Server Setup
  41. Jenkins Plug-ins
  42. Jenkins Freestyle jobs
  43. Jenkins2 Pipeline jobs using Groovy code in Jenkinsfile

  44. Docker (Glossary, Ecosystem, Certification)
  45. Make Makefile for Docker
  46. Docker Setup and run Bash shell script
  47. Bash coding
  48. Docker Setup
  49. Dockerize apps
  50. Docker Registry

  51. Maven on MacOSX

  52. Ansible
  53. Kubernetes Operators
  54. OPA (Open Policy Agent) in Rego language

  55. MySQL Setup

  56. Threat Modeling
  57. SonarQube & SonarSource static code scan

  58. API Management Microsoft
  59. API Management Amazon

  60. Scenarios for load
  61. Chaos Engineering