INTRODUCTION
Thousands of companies have taken advantage of load testing technology released under permissive open source licenses, such as MIT, to help them build scalable web services and other products.
This post surveys the most notable open source performance testing tools, clears up the difference between free software and open source software, and then compares the leading tools (JMeter, Gatling, Locust, and k6) head to head as of 2021.
Open Source Performance Testing Tools
JMeter
The Apache JMeter application is open source software. It is a pure Java application designed to load test an application and measure its performance.
Gatling
Gatling is a highly capable load testing tool. It is designed for ease of use, maintainability and high performance.
Locust
Locust is an easy-to-use, distributed, user load testing tool. It is intended for load-testing websites (or other systems) and figuring out how many concurrent users a system can handle.
Tsung
Tsung is an open-source multi-protocol distributed load testing tool. It can be used to stress HTTP, WebDAV, SOAP, PostgreSQL, MySQL, LDAP, MQTT, and Jabber/XMPP servers.
Siege
Siege is an HTTP load testing and benchmarking utility. Siege supports basic authentication, cookies, HTTP, HTTPS and FTP protocols. It lets its user hit a server with a configurable number of simulated clients.
Httperf
Httperf is a tool for measuring web server performance. It provides a flexible facility for generating various HTTP workloads and for measuring server performance.
Taurus
Although not strictly a performance testing tool itself, Taurus provides an automation-friendly framework for continuous testing, covering both functional and performance testing.
Artillery
Artillery is a modern, powerful & easy-to-use load testing and functional testing toolkit. Use it to ship scalable applications that stay performant & resilient under high load.
Goad
Goad takes full advantage of the power of AWS Lambda for distributed load testing. You can use Goad to launch HTTP loads from up to four AWS regions at once. Each Lambda function can handle hundreds of concurrent connections, and Goad can achieve peak loads of up to 100,000 concurrent requests.
Apache Bench
ab is a tool for benchmarking your Apache Hypertext Transfer Protocol (HTTP) server. It is designed to give you an impression of how your current Apache installation performs.
Free Load Testing Tools vs. Open Source Load Testing Tools – What’s the Difference?
Have you ever wondered why those free web load testing tools or that free test script software you’re using are considered free rather than open source? Is there even a difference? The terms free software and open source software are often used interchangeably, so they easily get confused with each other. Many people think that anything described as open source software is free and, vice versa, that anything free must also be open source software. While the terms have certainly evolved over time, there are some inherent differences. We’ll briefly explain some of the major differences here to help clear up the confusion.
Free Software
While software itself is not typically viewed as political in nature, one of the interesting aspects of the term free software is that it originally came about as a social movement, grounded in a moral case for collaboration. The term itself didn’t take hold until the mid-1980s, when a non-profit organization called the Free Software Foundation (FSF) began championing the cause behind free software. They explain it in a succinct way, saying “think of it more as the ‘free’ in ‘free speech’.” Over time, the FSF was able to gain traction and solidify its initiative. Under the requirements set by the FSF, everyone should have the freedom to run, copy, distribute, study, change, and improve free software.
The FSF defines free software within these four pillars of freedom:
- Freedom to run the program, as you wish, for whatever purpose.
- Freedom to study the source code of the program, so that it carries out the computing you intend it to do. Having the source code, not just the executable, is mandatory.
- Freedom to redistribute exact copies. You may give copies away or sell them as you wish.
- Freedom to redistribute modified copies.
So, the next time you’re in the market for free load testing tools for web applications or a free stress test tool, you might want to take some time to do your due diligence. And even though it pains everyone to read license agreements, you might come across some interesting tidbits about the software you want to use, ensuring that what you want to do, you’re allowed to do.
Open Source Software
Open source software refers to a methodology or set of values, rather than a movement, focused on the practical and economic benefits of collaboration. Open source software is much more than just getting access to source code. Unlike proprietary software, which only the original authors can legally modify or copy, open source software authors make the source code available for everyone to alter, copy, and share. There’s still a license agreement users must agree to, but there are ultimately none of the legal ramifications for using, studying, copying, or modifying open source software that you would face with proprietary software.
However, it really comes down to the specific license agreement. While nearly all open source software is free (and that’s how most people understand it), it doesn’t mean programmers can’t charge users for their version of the software. For example, a programmer may provide the software for free, but charge users a support or development fee, ensuring that users get continued support for the software they’re using. Some examples of popular open source software include WordPress, Apache HTTP Server, Mozilla Firefox, and Chromium.
Like the free software movement created by the FSF, the Open Source Initiative (OSI) was founded in the late 1990s to help promote open source software. Its site lists ten criteria, known as the Open Source Definition, along with detailed guidelines and rules for what defines open source software.
The Debate Continues
Over time, the two terms have become more synonymous with each other, and parties on both sides continue to debate free software versus open source software. A quick search will turn up enough articles and debate to last years. Due to the gray area between free and open source, new terms have been introduced, such as FOSS (Free and Open Source Software) or FLOSS (Free/Libre and Open Source Software), to help make the differentiation easier for everyone. Ultimately, there’s likely no practical difference between the free web application load testing software and the open source web server load testing software you’re looking at; however, what you want to do with it versus what you can do with it makes all the difference.
Open Source Load Testing Tools 2021
Load Testing | Sep 29, 2021 | Dmitri Tikhanski, Contributing Writer, BlazeMeter blog
BlazeMeter Blog readers might remember the post regarding Open Source Load Testing Tools, which highlighted the main features of the most outstanding performance testing tools of 2013.
However, in the world of software development (and the associated testing) things change quite fast. New tools appear (like k6) and old tools lose popularity, so now is probably a good time to revisit the open source performance tools list and see what the current situation is.
As per the Google Trends report, Apache JMeter is still the most popular tool, and there is growing interest in Locust and k6.
Open Source Load Testing Tools Feature Comparison Matrix
So let’s see what JMeter, Gatling, Locust, and k6 look like in 2021.
| Feature | JMeter | Gatling | Locust | k6 |
| --- | --- | --- | --- | --- |
| OS | Any | Any | Any | Any |
| GUI | Yes | Recorder only | No | No |
| Test Recorder | HTTP, Siebel, Mainframe, Citrix | HTTP | No | HTTP |
| Programming/Extension Language | Groovy (recommended) and any other language supported by the JSR223 specification | Scala | Python | JavaScript |
| Load Reports | Console, HTML, CSV, XML | Console, HTML | Console, HTML | CSV, JSON |
| Protocols | HTTP, FTP, JDBC, SOAP, LDAP, TCP, JMS, SMTP, POP3, IMAP | HTTP, MQTT, JMS | HTTP | HTTP, gRPC |
| System under test monitoring | With plugin | No | No | No |
| Clustered mode | Yes | No | Yes | No |
Open Source Test Tools Throughput Comparison
Thanks to the Taurus automation framework, BlazeMeter now supports all of these tools, so we can compare their resource consumption using BlazeMeter engines and see how they behave.
In the previous blog post the load was 20 virtual users x 100,000 iterations. Given that Locust doesn’t support running a fixed number of iterations, let’s instead run 20 virtual users for 1 minute with each tool and see how many requests get executed and what the associated CPU and memory footprint in the BlazeMeter engine is.
Anyone with the BlazeMeter Free Tier will be able to replicate the test execution and get the results.
The test is really not complex: a single simple HTTP GET request to a host with Apache HTTP Server installed, hitting the default landing page. The load testing tool repeats the request as fast as it can. Every tool is run via Taurus, so each test consists of:
- The specific script for the load testing tool.
- The Taurus YAML configuration file, instructing BlazeMeter how to run the script.
Apache JMeter
JMeter scripts are basically XML files, so the script body looks kind of scary:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<jmeterTestPlan version="1.2" properties="5.0" jmeter="5.4.1">
  <hashTree>
    <TestPlan guiclass="TestPlanGui" testclass="TestPlan" testname="Test Plan" enabled="true">
      <stringProp name="TestPlan.comments"></stringProp>
      <boolProp name="TestPlan.functional_mode">false</boolProp>
      <boolProp name="TestPlan.tearDown_on_shutdown">true</boolProp>
      <boolProp name="TestPlan.serialize_threadgroups">false</boolProp>
      <elementProp name="TestPlan.user_defined_variables" elementType="Arguments" guiclass="ArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
        <collectionProp name="Arguments.arguments"/>
      </elementProp>
      <stringProp name="TestPlan.user_define_classpath"></stringProp>
    </TestPlan>
    <hashTree>
      <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Thread Group" enabled="true">
        <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
        <elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true">
          <boolProp name="LoopController.continue_forever">false</boolProp>
          <stringProp name="LoopController.loops">5000</stringProp>
        </elementProp>
        <stringProp name="ThreadGroup.num_threads">20</stringProp>
        <stringProp name="ThreadGroup.ramp_time">1</stringProp>
        <boolProp name="ThreadGroup.scheduler">false</boolProp>
        <stringProp name="ThreadGroup.duration">60</stringProp>
        <stringProp name="ThreadGroup.delay"></stringProp>
        <boolProp name="ThreadGroup.same_user_on_next_iteration">true</boolProp>
      </ThreadGroup>
      <hashTree>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="HTTP Request" enabled="true">
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" enabled="true">
            <collectionProp name="Arguments.arguments"/>
          </elementProp>
          <stringProp name="HTTPSampler.domain">129.159.202.229</stringProp>
          <stringProp name="HTTPSampler.port"></stringProp>
          <stringProp name="HTTPSampler.protocol">http</stringProp>
          <stringProp name="HTTPSampler.contentEncoding"></stringProp>
          <stringProp name="HTTPSampler.path"></stringProp>
          <stringProp name="HTTPSampler.method">GET</stringProp>
          <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
          <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
          <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
          <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
          <stringProp name="HTTPSampler.connect_timeout"></stringProp>
          <stringProp name="HTTPSampler.response_timeout"></stringProp>
        </HTTPSamplerProxy>
        <hashTree/>
      </hashTree>
    </hashTree>
  </hashTree>
</jmeterTestPlan>
```
However, you can also open the script in the JMeter GUI, where it makes much more sense.
Here is the Taurus YAML declarative script that overrides the values defined in the JMeter Thread Group (Taurus’s concurrency and hold-for settings take precedence over the Thread Group’s thread count and duration):
```yaml
---
execution:
- executor: jmeter
  concurrency: 20
  hold-for: 1m
  scenario:
    script: jmeter-script.jmx
provisioning: cloud
modules:
  cloud:
    test: JMeter
    report-name: JMeter 20 users for 1 minute
    project: Load Testing Tools 2021
```
[Screenshot: Summary Page]
[Screenshot: Request Stats Page]
[Screenshot: Engine Health Page]
Gatling
Gatling scripts are Scala source files, so they are somewhat more readable than JMeter’s XML:
```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._

import scala.concurrent.duration._

class Gatling extends Simulation {

  val httpProtocol = http
    .baseUrl("http://129.159.202.229/")

  val scn = scenario("BasicSimulation")
    .exec(
      http("http://129.159.202.229/")
        .get("/")
    )

  setUp(
    scn.inject(
      constantConcurrentUsers(20).during(60.seconds)
    ).protocols(httpProtocol)
  )
}
```
Here is the relevant Taurus YAML; no overrides this time, just telling Taurus what to run:
```yaml
execution:
- executor: gatling
  scenario: gatling
scenarios:
  gatling:
    script: gatling-script.scala
    simulation: Gatling
provisioning: cloud
modules:
  cloud:
    test: Gatling
    report-name: Gatling 20 users for 1 minute
    project: Load Testing Tools 2021
```
[Screenshot: Summary Page]
[Screenshot: Request Stats]
[Screenshot: Engine Health]
Locust
Locust scripts are written in Python, so they are probably the easiest to read and understand:
```python
from locust import HttpUser, TaskSet, constant, task


class UserBehaviour(TaskSet):
    @task(1)
    def generated_task(self):
        self.client.get(timeout=30.0, url="/")


class GeneratedSwarm(HttpUser):
    tasks = [UserBehaviour]
    host = "http://129.159.202.229/"
    wait_time = constant(0)
```
And again the associated Taurus YAML configuration:
```yaml
execution:
- executor: locust
  concurrency: 20
  hold-for: 1m
  scenario: example
scenarios:
  example:
    default-address: http://129.159.202.229/
    script: locust-script.py
provisioning: cloud
modules:
  cloud:
    test: Locust
    report-name: Locust 20 users for 1 minute
    project: Load Testing Tools 2021
```
[Screenshot: Summary Page]
[Screenshot: Request Stats Page]
[Screenshot: Engine Health Page]
k6
k6 tests are written in JavaScript, so once again our simple test is very small:
```javascript
import http from 'k6/http'

export default function () {
  http.get('http://129.159.202.229')
}
```
The Taurus YAML file is actually bigger than the test itself:
```yaml
---
execution:
- executor: k6
  concurrency: 20
  hold-for: 1m
  scenario: k6
scenarios:
  k6:
    script: k6.js
provisioning: cloud
modules:
  cloud:
    test: k6
    report-name: k6 20 users for 1 minute
    project: Load Testing Tools 2021
```
Unfortunately, as of now BlazeMeter doesn’t support k6 results interpretation very well, so the metrics below were obtained from the BlazeMeter Logs page.
```
  execution: local
     script: /tmp/artifacts/k6.js
     output: csv (/tmp/artifacts/kpi.csv)

  scenarios: (100.00%) 1 scenario, 20 max VUs, 1m30s max duration (incl. graceful stop):
           * default: 20 looping VUs for 1m0s (gracefulStop: 30s)

running (0m01.0s), 20/20 VUs, 140 complete and 0 interrupted iterations
default   [   2% ] 20 VUs  0m01.0s/1m0s

[... similar progress lines removed to keep the log short ...]

running (1m01.1s), 00/20 VUs, 11049 complete and 0 interrupted iterations
default ✓ [ 100% ] 20 VUs  1m0s

     data_received..................: 124 MB 2.0 MB/s
     data_sent......................: 895 kB 15 kB/s
     http_req_blocked...............: avg=1.16ms   min=1.34µs   med=2.93µs   max=111.57ms p(90)=6.31µs   p(95)=9.12µs
     http_req_connecting............: avg=1.15ms   min=0s       med=0s       max=111.49ms p(90)=0s       p(95)=0s
     http_req_duration..............: avg=109.13ms min=106.07ms med=106.84ms max=1.1s     p(90)=107.64ms p(95)=108.18ms
       { expected_response:true }...: avg=109.13ms min=106.07ms med=106.84ms max=1.1s     p(90)=107.64ms p(95)=108.18ms
     http_req_failed................: 0.00%  ✓ 0     ✗ 11049
     http_req_receiving.............: avg=201.07µs min=30.27µs  med=95.11µs  max=53.55ms  p(90)=247.15µs p(95)=482.01µs
     http_req_sending...............: avg=32.93µs  min=7.71µs   med=16.13µs  max=18.68ms  p(90)=39.59µs  p(95)=59.41µs
     http_req_tls_handshaking.......: avg=0s       min=0s       med=0s       max=0s       p(90)=0s       p(95)=0s
     http_req_waiting...............: avg=108.89ms min=105.98ms med=106.65ms max=1.1s     p(90)=107.41ms p(95)=107.86ms
     http_reqs......................: 11049  180.834133/s
     iteration_duration.............: avg=108.62ms min=106.15ms med=106.98ms max=244.45ms p(90)=107.82ms p(95)=108.55ms
     iterations.....................: 11049  180.834133/s
     vus............................: 20     min=20 max=20
     vus_max........................: 20     min=20 max=20
```
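Since BlazeMeter doesn’t chart these k6 numbers natively yet, one workaround is to post-process the kpi.csv file that k6 writes (its path appears in the log above). Here is a minimal sketch, assuming k6’s standard CSV column layout (metric_name, timestamp, metric_value, ...):

```python
# Summarize k6's CSV results by hand: request count and average
# response time (ms), assuming the standard k6 CSV column layout.
import csv

durations = []
with open("/tmp/artifacts/kpi.csv") as f:
    for row in csv.DictReader(f):
        if row["metric_name"] == "http_req_duration":
            durations.append(float(row["metric_value"]))

print(f"{len(durations)} requests, avg response time "
      f"{sum(durations) / len(durations):.2f} ms")
```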
Test Tools Results Comparison
| Tool | Requests (per 1 min) | Avg. Response Time (ms) | Bandwidth (KiB/s) |
| --- | --- | --- | --- |
| JMeter | 5580 | 214 | 1019 |
| Gatling | 5573 | 213 | 1017 |
| Locust | 10544 | 112 | 1873 |
| k6 | 11049 | 109 | 2116 |
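The Bandwidth column’s unit isn’t spelled out in the BlazeMeter report, but we can sanity-check it against the k6 log above. A minimal sketch, assuming the column is kibibytes per second and taking k6’s own data_received figure at face value:

```python
# Hedged sanity check: does k6's "data_received: 124 MB" over the 1-minute
# run line up with the 2116 in the Bandwidth column (assumed to be KiB/s)?
data_received_bytes = 124 * 1000 * 1000  # k6 prints SI units
duration_s = 60                          # the 1-minute hold time
print(f"{data_received_bytes / 1024 / duration_s:.0f} KiB/s")  # ~2018
```

The result (~2018 KiB/s) is close to, though not exactly, the 2116 in the table; BlazeMeter and k6 measure over slightly different windows, and k6 rounds its totals.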
So far so good: we have two clear leaders and two laggards. However, I have one doubt regarding the Bandwidth reported by Locust.
If we compare the Engine Health pages for the JMeter and Locust tests, we see that the throughput reported by Locust is 3x higher than the throughput reported by BlazeMeter. In order to exclude possible BlazeMeter bugs, let’s look at the network metrics on the side of the system under test.
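One way to get an independent reading is to sample the kernel’s interface counters on the system under test itself while the load is running. A minimal sketch, assuming a Linux SUT; eth0 is a placeholder for your actual interface name:

```python
# Measure real network throughput on the system under test by sampling
# /proc/net/dev byte counters before and after a 60-second window.
import time

def rx_tx_bytes(iface="eth0"):  # "eth0" is a placeholder interface name
    with open("/proc/net/dev") as f:
        for line in f:
            line = line.strip()
            if line.startswith(iface + ":"):
                fields = line.split(":", 1)[1].split()
                return int(fields[0]), int(fields[8])  # rx_bytes, tx_bytes
    raise ValueError(f"interface {iface} not found")

rx0, tx0 = rx_tx_bytes()
time.sleep(60)  # let the load test run for the measurement window
rx1, tx1 = rx_tx_bytes()
print(f"received {(rx1 - rx0) / 1024 / 60:.0f} KiB/s, "
      f"sent {(tx1 - tx0) / 1024 / 60:.0f} KiB/s")
```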
I fail to see a good reason why Locust reports 2x more requests than JMeter does yet transfers 2x fewer bytes of data. The k6 results, on the other hand, are in line with the system under test’s metrics. It is yet more evidence that you need to pay attention to literally everything, and not focus only on the KPIs your load testing tool reports, as they never tell the full story.
So the winner seems to be k6. However, one obvious question remains: how can the response time be half as long if JMeter and Gatling are sending the same requests as k6 does? You’re welcome to share your thoughts in the comments section below.
You can run all of these test scripts in BlazeMeter to achieve scalability and advanced reporting.
CONCLUSION
Building a load testing framework that allows teams to seamlessly write and run tests across multiple programming languages is a monumental task, since it has to integrate with each and every tool and language that matters in the marketplace. Taurus takes on exactly that job, and it is what made the side-by-side comparison in this post possible.
The Taurus project is hosted at GitHub, where you can contribute your code.