Performance Testing Best Practices

When you’re conducting performance testing, you need to maximize the ROI of your time and effort. For clients, this translates into minimizing costs and maximizing revenue. For agencies, this translates into winning contracts and increasing your client base. This guide will help you identify best practices when it comes to performance testing so you can accelerate the delivery of your projects and make a positive impact on your business.

Performance testing is an important part of your end-to-end quality assurance (QA) and performance monitoring process. Well-designed, continually executed performance tests can tell you a lot about your applications: where the problems are, and how to improve the customer experience.

9 Performance Testing Best Practices

1. Test Early and Often

Performance testing is often an afterthought, performed in haste late in the development cycle, or only in response to user complaints. Instead, you should be proactive. Take an agile approach that uses iterative testing throughout the entire development life cycle. Specifically, provide the ability to run performance “unit” testing as part of the development process – and then repeat the same tests on a larger scale in later stages of application readiness. Use performance testing tools as part of an automated pass/fail pipeline, where code that passes moves through the pipeline and code that fails is returned to a developer.
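
To make this concrete, here is a minimal sketch of what a performance "unit" test gating a CI pipeline could look like, written in Python with pytest and requests. The endpoint URL, sample count, and latency budget are illustrative assumptions, not prescriptions:

    # test_perf.py -- fails the build if p95 latency for one endpoint blows its budget.
    # Run with: pytest test_perf.py
    import statistics
    import time

    import requests

    BUDGET_MS = 300                            # assumed latency budget
    URL = "http://localhost:8000/api/health"   # hypothetical service under test

    def test_endpoint_latency_budget():
        samples = []
        for _ in range(20):
            start = time.monotonic()
            requests.get(URL, timeout=5)
            samples.append((time.monotonic() - start) * 1000)
        p95 = statistics.quantiles(samples, n=20)[-1]   # 95th percentile
        assert p95 <= BUDGET_MS, f"p95 latency {p95:.0f} ms exceeds {BUDGET_MS} ms budget"

The same test can later be repeated at larger scale with a heavier load generator; the pass/fail assertion is what lets the pipeline route failing code back to a developer.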

2. Consider Users, Not Just Servers

Performance tests often focus solely on the performance of servers and clusters running software. Don’t forget that people use software, and performance tests also should measure the human element. For instance, measuring the performance of clustered servers may return satisfactory results, but users on a single, troubled server may experience an unsatisfactory result. Tests should take user experience into account, and user interface timing should also be captured along with server metrics.
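
To see how a healthy-looking cluster average can hide one troubled server, consider this small illustration (all numbers are made up):

    # Aggregate averages can pass while one node's users suffer.
    import statistics

    response_ms = {
        "web-1": [110, 120, 115, 118],
        "web-2": [105, 112, 108, 110],
        "web-3": [900, 950, 880, 920],  # the troubled server
    }

    all_samples = [ms for node in response_ms.values() for ms in node]
    print(f"cluster average: {statistics.mean(all_samples):.0f} ms")  # ~379 ms -- may pass a 500 ms goal

    for node, samples in response_ms.items():
        print(f"{node}: avg {statistics.mean(samples):.0f} ms")       # web-3 users see ~913 ms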

3. Understand Performance Test Definitions

It’s crucial to have a common definition for the types of performance tests that should be executed against your applications, such as the following (a brief load-test sketch appears after the list):

  • Single User Tests. Testing with one active user yields the best possible performance, and response times can be used for baseline measurements.
  • Load Tests. Understand the behavior of the system under average load, including the expected number of concurrent users performing a specific number of transactions within an average hour.
  • Peak Load Tests. Understand system behavior under the heaviest anticipated usage, in terms of concurrent users and transaction rates.
  • Endurance (Soak) Tests. Determine the longevity of components, and whether the system can sustain average to peak load over a predefined duration. Monitor memory utilization to detect potential leaks.
  • Stress Tests. Understand the upper limits of capacity within the system by purposely pushing it to its breaking point.
  • High Availability Tests. Validate how the system behaves during a failure condition while under load. There are many operational use cases that should be included, such as seamless failover of network equipment or rolling server restarts.
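
Most of these variations can be driven from a single script by changing user counts, spawn rates, and duration at run time. As a rough sketch, here is a minimal Locust script (pip install locust); the host, endpoints, and task weights are placeholder assumptions:

    # loadtest.py -- one script, several test types, selected by run-time flags, e.g.:
    #   locust -f loadtest.py --headless --users 500  --spawn-rate 50  --run-time 1h   (load)
    #   locust -f loadtest.py --headless --users 2000 --spawn-rate 200 --run-time 15m  (peak)
    #   locust -f loadtest.py --headless --users 500  --spawn-rate 50  --run-time 12h  (soak)
    from locust import HttpUser, task, between

    class ShopUser(HttpUser):
        host = "http://localhost:8000"  # hypothetical system under test
        wait_time = between(1, 5)       # think time between transactions

        @task(3)
        def browse(self):
            self.client.get("/products")                              # hypothetical endpoint

        @task(1)
        def checkout(self):
            self.client.post("/cart/checkout", json={"sku": "demo"})  # hypothetical endpoint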

4. Build a Complete Performance Model

Measuring your application’s performance includes understanding your system’s capacity. This includes planning what the steady state will be in terms of concurrent users, simultaneous requests, average user sessions and server utilization during peak periods of the day. Additionally, you should define performance goals, such as maximum response times, system scalability and user satisfaction scores, along with the maximum acceptable values for each of these metrics.
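
One way to turn those planning numbers into a throughput target is Little's Law (concurrent users = throughput × time per interaction). A small sketch, with purely illustrative figures:

    # Little's Law: throughput ~= users / (response time + think time).
    # All figures below are illustrative assumptions, not measurements.

    def target_throughput(concurrent_users: int, avg_response_s: float, avg_think_s: float) -> float:
        """Requests per second the system must sustain."""
        return concurrent_users / (avg_response_s + avg_think_s)

    steady = target_throughput(concurrent_users=2_000, avg_response_s=0.5, avg_think_s=9.5)
    peak = target_throughput(concurrent_users=5_000, avg_response_s=0.8, avg_think_s=9.2)
    print(f"steady state: {steady:.0f} req/s, peak: {peak:.0f} req/s")  # 200 and 500 req/s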

5. Define Baselines for Important System Functions

In most cases, the performance of QA systems doesn’t match that of production systems. Having baseline performance measurements for each system gives you reasonable goals for each testing environment. These baselines provide an important starting point for response time goals, especially when there are no previous metrics, so you don’t have to guess or base goals on the performance of other applications.
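
A minimal sketch of how per-environment baselines might be encoded, so that response-time goals are relative to each system’s own baseline rather than to production numbers (the baseline values and tolerance are assumptions):

    # Each environment gets its own baseline; a regression is a miss against that baseline.
    BASELINES_MS = {
        "qa":   {"login": 450, "search": 700},
        "prod": {"login": 200, "search": 320},
    }
    TOLERANCE = 1.25  # flag anything more than 25% over its baseline

    def within_baseline(env: str, transaction: str, measured_ms: float) -> bool:
        return measured_ms <= BASELINES_MS[env][transaction] * TOLERANCE

    assert within_baseline("qa", "login", 480)          # fine against the QA baseline
    assert not within_baseline("qa", "search", 1_000)   # regression, even for QA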

6. Perform Modular and System Performance Tests

Modern applications incorporate many individual, complex systems, including databases, application servers, web services and legacy systems. All of these systems need to be performance tested individually and together. Doing so exposes weaknesses, highlights interdependencies and shows which systems you should isolate for further performance tuning.

7. Measure Averages, but Include Outliers

When testing performance, you need to know average response time, but this measurement can be misleading by itself. Be sure to include other metrics, such as 90th percentile or standard deviation, to get a better view of system performance.

KPIs can be measured by looking at averages and standard deviations. For example, set a performance goal of the average response time plus one standard deviation (see Figure 1). In many systems, this improved measurement affects the pass/fail criteria of the test, matching the actual user experience more accurately. Transactions with a high standard deviation can be tuned to reduce system response time variability and improve the overall user experience.
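
As a quick illustration of how much one outlier changes the picture, here is a standard-library sketch with made-up samples:

    # Response-time samples in milliseconds; one outlier skews the picture.
    import statistics

    samples_ms = [120, 130, 125, 140, 135, 128, 950, 132, 127, 138]

    mean = statistics.mean(samples_ms)                  # ~213 ms -- looks fine on its own
    stdev = statistics.stdev(samples_ms)                # ~259 ms -- exposes the variability
    p90 = statistics.quantiles(samples_ms, n=10)[-1]    # 90th percentile

    print(f"mean={mean:.0f} ms, stdev={stdev:.0f} ms, p90={p90:.0f} ms")
    # Pass/fail goal in the style described above: mean plus one standard deviation.
    assert mean + stdev < 500, "response-time goal missed"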

[Figure 1: distribution of application response times, showing the average plus one standard deviation]

8. Consistently Report and Analyze the Results

Performance test design and execution are important, but test reports are, too. Reports communicate the results of your application’s behavior to everyone in your organization, especially project owners and developers. Analyzing and reporting results consistently also helps to determine future updates and fixes. Remember to consider your audience and tailor each report accordingly: reports for developers should differ from those sent to project owners, managers, corporate executives and even customers, if applicable.

9. Triage Performance Issues

Providing the results of performance tests is fine, but those results, especially when they demonstrate failure, are not enough. The next step should be to triage code, application and system performance, involving all parties: developers, testers and operations personnel. Application monitoring tools can provide clarity regarding the effectiveness of triage.

Everyone knows performance testing is important, but how do you make your tests realistic?

I had the honor of addressing this topic at the Velocity Conference in New York. Here are five tips I shared.

1. Set a baseline for user experience

Performance is not merely a question of load times and application responsiveness. What you really want to know is: How satisfied are my users?

Our team gave this measurement a name: FunDex. The higher the FunDex is, the more positive the user experience is. Improving performance gets you FunDex points, but app crashes and hogged resources take them away. Put another way, decreasing page load time at the expense of stability is not a sustainable solution.

We continuously collect millions of data points to track FunDex over time, giving the development team a rolling view of whether changes to the code are improving or detracting from the user experience. Whether or not you have a single metric like FunDex, the point is that your performance testing strategy needs to be more holistic than simply looking at page load times. It needs to consider the entire user experience.
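
The exact FunDex formula isn’t published here, so take this as a guess at its general shape rather than the real thing: a composite score that rewards speed but penalizes instability and hogged resources (all weights are invented):

    # NOT the real FunDex -- an illustrative composite UX score with invented weights.

    def ux_score(avg_load_s: float, crashes_per_1k_sessions: float, cpu_util: float) -> float:
        """Higher is better; speed earns points, crashes and hogged resources cost them."""
        speed_points = max(0.0, 100 - 20 * avg_load_s)
        crash_penalty = 5 * crashes_per_1k_sessions
        resource_penalty = 30 * max(0.0, cpu_util - 0.8)
        return speed_points - crash_penalty - resource_penalty

    # A faster page that crashes more can still score worse overall:
    print(ux_score(avg_load_s=1.5, crashes_per_1k_sessions=1, cpu_util=0.5))    # 65.0
    print(ux_score(avg_load_s=0.8, crashes_per_1k_sessions=10, cpu_util=0.95))  # 29.5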

2. Create realistic tests

Throwing thousands or millions of clients at a server cluster may stress-test your environment, but it is not going to accurately measure how your app or site performs in a real-world scenario. There are two major issues you need to consider when setting up your testing environment.

First, the testbed must reflect the variety of devices and client environments being used to access the system. Traffic is likely to arrive from hundreds of different types of mobile devices, web browsers, and operating systems, and the test load needs to account for that.

Also, this load is far from predictable, so the test needs to be built with randomness and variability in mind, mixing up the device and client environment load on the fly. By continuously varying the environment and the type of data that is passed, the development organization faces fewer surprises down the road after the application is put into production.
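
A sketch of what that on-the-fly variation might look like; the device mix and weights are assumptions (in practice they would come from production analytics):

    # Each simulated user draws a fresh client profile, so the mix varies continuously.
    import random

    DEVICE_MIX = [                   # (client profile, relative share of traffic)
        ("iphone-safari", 0.35),
        ("android-chrome", 0.30),
        ("desktop-chrome", 0.20),
        ("desktop-firefox", 0.10),
        ("tablet-safari", 0.05),
    ]

    def next_client_profile() -> str:
        labels, weights = zip(*DEVICE_MIX)
        return random.choices(labels, weights=weights, k=1)[0]

    print([next_client_profile() for _ in range(5)])  # a different mix on every run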

Second, the simulation can’t start from a zero-load situation. Many test plans start from a base, boot-up situation, then begin adding clients slowly until the desired load is reached. This simply isn’t realistic and provides the testing engineer an inaccurate picture of system load. As applications are updated and rolled out, the systems they’re running on will already be under load. That load may change over time, but it won’t go to zero and slowly build back up.
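
One way to encode that: have the load profile open at an assumed steady-state baseline and ramp toward peak from there, never from zero (the numbers are illustrative):

    # A ramp that starts at baseline load, not at zero.

    def load_profile(baseline_users: int, peak_users: int, ramp_steps: int) -> list[int]:
        step = (peak_users - baseline_users) / ramp_steps
        return [round(baseline_users + step * i) for i in range(ramp_steps + 1)]

    print(load_profile(baseline_users=1_000, peak_users=5_000, ramp_steps=8))
    # -> [1000, 1500, 2000, 2500, 3000, 3500, 4000, 4500, 5000]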

3. Know the difference between measured speed and perceived performance

Performance may mean one thing to you, but another thing to your user. If you are simply measuring load times, you’re missing the big picture. Your users aren’t waiting for pages to load with stopwatches in hand. Rather, they are waiting for the app to do something useful.

So how quickly can users get to useful data? To find out, you need to include client processing time in your measure of load times. It is easy to “cheat” on a performance test by pushing processing work from the server to the client. From the server standpoint, this makes pages appear to load more quickly. But forcing the client to do extra processing may actually make the real-world load time longer.

It isn’t necessarily a bad strategy to push processing to the client. But you must take the impact on perceived speed into account during testing. Remember: Measure performance from the user’s perspective, not the server’s.
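
One way to capture perceived speed is to time the moment useful content appears, not just the server response. A sketch using Playwright (pip install playwright, then playwright install chromium); the URL and the "#results" selector are hypothetical placeholders:

    # Time-to-useful-content vs. raw response time.
    import time
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()

        start = time.monotonic()
        page.goto("https://example.com/search?q=test")  # server has responded...
        server_done = time.monotonic() - start

        page.wait_for_selector("#results")              # ...but when is the data usable?
        useful = time.monotonic() - start
        browser.close()

    print(f"response received: {server_done:.2f}s, useful content visible: {useful:.2f}s")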

4. Correlate performance issues to underlying problems

Let’s say you’ve built a robust testing environment and have a solid and thorough understanding of performance from a user perspective. Now what? To be effective, your testing strategy must correlate performance bottlenecks with the code that’s creating problems. Otherwise, remediation is very difficult.

In one recent test example, we found that a single page was generating four different REST calls, resulting in 62 database queries, 28 of which were duplicates. That is a massive amount of processing power—and a lot of wasted time and cycles—for a single page. Multiply that over thousands and thousands of page views and you can easily see how the environment could be improved by optimizing these calls.

Our team solved the problem in two ways. First, it used a caching system to ensure that the duplicate database queries didn’t result in a fresh call to the database. Second, work was done to optimize the remaining queries and improve their efficiency. This is of course all part of the best practices for any application design project. However, the team could only isolate the problems by testing the system under a realistic workload and tracing the bottlenecks back to the code.
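
The article doesn’t name the caching layer the team used, but the first fix can be sketched with a simple memoizing cache standing in for it:

    # Duplicate queries hit the cache, not the database.
    from functools import lru_cache

    db_round_trips = 0

    @lru_cache(maxsize=None)
    def run_query(sql: str) -> str:
        global db_round_trips
        db_round_trips += 1          # simulate one real round trip to the database
        return f"rows for: {sql}"

    queries = ["SELECT * FROM users WHERE id=7"] * 28 + ["SELECT * FROM orders WHERE user_id=7"]
    for q in queries:
        run_query(q)

    print(db_round_trips)  # 2 round trips instead of 29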

5. Make performance testing part of agile development

All too often, performance testing has been isolated in its own tower and left until the end of a development project. At that point it’s probably too late to fix issues easily. Any problems discovered are likely to significantly delay your project by throwing development into firefighting mode.

To avoid this problem, make performance testing part of the agile development process. That way you can find problems and fix them quickly.

Specifically, testing must be integrated into development: performance engineering should be represented in daily scrum meetings and made responsible for measuring and tracking performance as the code is developed, within the same development cycle. Leaving testing until the end of the development process is too late.

Think outside the box

Put it all together, and the key to realistic testing is to take a broad view of performance. Do you know what your users care about? Have you thought about the infrastructure you will need for realistic tests? Do you know how to trace problems back to their source? Do you have a plan for collaborating with your developers? Think big, and your testing problems will get a lot smaller.

Conclusion

Good information architecture is not just about structuring your pages well. It’s also about ensuring that the client side of your application performs well; client load time and the size of JavaScript payloads are critical factors here. In this article, I’ve taken you on a brief journey through the performance testing best practices that can make your site faster, which will in turn boost your revenue.

There are plenty of articles that cover common performance issues and how to fix them, but I found it difficult to find best practices for performance testing. However, at a recent conference someone recommended a book called Performance Testing by Vikram Joshi and Gaurav Banga. It’s a very comprehensive resource on how to perform testing from different perspectives.
