Performance testing is a crucial part of an application’s development. It gives a development team the information needed to see how software or web applications will perform under a specific workload. It is important that performance testers look at the right metrics to understand how software will perform in the real world. When carrying out a performance test, there are specific requirements that developers must measure so that they know the product will perform under pressure. So what are the requirements for performance testing?
Let’s review the common requirements for performance testing, look at examples of these requirements, and cover best testing practices.
What are the requirements for performance testing?
Performance testing is a progressive, repetitive process. Any given test will uncover a bottleneck or design issue with an application or environment, but many other issues may remain masked or blocked by the one just found. Until that first bottleneck is removed, the system performance problems behind it stay hidden.
Because performance testing is an iterative cycle of testing, uncovering bottlenecks or other issues, resolving them, and retesting, each round of test and remediation requires a follow-up effort to confirm that the uncovered problems have actually been resolved.
There are many ways to measure speed, scalability, and stability. The metrics most often measured in performance testing are:
- Response time: This is the total time to send a request and get a response.
- Time to first buffer or wait time: This tells developers how long it takes to receive the first byte after a request is sent. It measures how long the server took to process the request.
- Peak response time: A peak response time that is significantly higher than average will require some investigation to determine whether it is a fluke or an indicator of a larger underlying problem.
- Error rate: This calculation is a ratio of requests resulting in errors compared to all requests. Careful analysis of errors is needed to fully understand why they happened, which part of the application architecture generated them, and the conditions that led to them. Many errors are caused by bad test data and faulty assumptions used during the test script definition process.
- Maximum concurrent users: Also known as concurrency, this is the most common measure of how many active users a system will handle at any point.
- Requests per second: The rate at which requests are sent to the server in a given interval of time. This can also be expressed over other intervals (milliseconds, hours, days) depending on the customer’s business needs.
- Total requests: The final sum of all requests submitted to the server during a test. This number is helpful when calibrating your tests to meet specific load models.
- Passed or failed transactions/requests: A measurement of the total number of successful and unsuccessful transactions and requests. Generally, a transaction is a logical unit of business work, and a business transaction may have one or more requests associated with it. A request is a unit of work sent to a server for which a response is expected.
- Throughput: The amount of data sent to and from the application servers over time. Throughput shows the amount of bandwidth used during the test and can be measured in kilobytes per second, megabits per second, or another unit.
- CPU utilization: The amount of CPU required to support the workload of the test. It is one indicator of server performance under load.
- Memory utilization: The amount of memory required to support the workload of the test. It is another indicator of server performance under load.
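Several of the metrics above can be derived from the same captured request data. The sketch below shows one way to compute average and peak response time, error rate, requests per second, and throughput from a hypothetical sample of request records; the record layout, numbers, and capture window are illustrative assumptions, not output from any particular tool.

```python
from statistics import mean

# Hypothetical request records captured during a test window.
# Each record: (response_time_ms, time_to_first_byte_ms, bytes_transferred, succeeded)
requests = [
    (120, 35, 2048, True),
    (340, 90, 4096, True),
    (95, 30, 1024, False),   # a failed request
    (210, 60, 3072, True),
]
test_duration_s = 2.0  # assumed wall-clock duration of the capture window

response_times = [r[0] for r in requests]
avg_response_ms = mean(response_times)                    # average response time
peak_response_ms = max(response_times)                    # peak response time
error_rate = sum(1 for r in requests if not r[3]) / len(requests)
requests_per_second = len(requests) / test_duration_s
throughput_kb_s = sum(r[2] for r in requests) / 1024 / test_duration_s

print(f"avg response: {avg_response_ms:.2f} ms, peak: {peak_response_ms} ms")
print(f"error rate: {error_rate:.0%}, RPS: {requests_per_second:.1f}, "
      f"throughput: {throughput_kb_s:.1f} KB/s")
```

With this sample, the peak (340 ms) is well above the average (191.25 ms), which, as noted above, is the kind of gap that warrants investigation.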
Performance test example
For a solid understanding of performance testing, here are a few performance test examples:
- An e-commerce website creates thousands of new accounts within a specified period of time to test the maximum workload it can handle before bogging down.
- An e-retailer has a new ad campaign that will go live soon. They expect the link used in the advertisements to generate exceptionally high server demands. How high can the servers go before there is a problem? Is that high enough? What would it take to double the capacity of the current system?
- A game developer tests a server for a new multiplayer game by having thousands of test accounts log on at the same time to test how many requests it can handle simultaneously.
Performance testing life cycle
The performance testing life cycle has seven steps:
- Identify the testing environment: Also known as the test bed, a testing environment is where software, hardware, and networks are set up to execute performance tests. Identifying the hardware, software, network configurations, and tools available allows the testing team to design a test that measures the required metrics and may even be repeatable in future use cases.
- Identify performance goals: In addition to identifying metrics such as response time, throughput, and constraints, identify the requirements and success and failure criteria for performance testing.
- Plan and design performance tests: Identify performance test scenarios that take into account user variability, test data, and target metrics.
- Identify data required for testing: Data will make or break your testing effort. Tasks required to manage your test data include:
- Identifying the types of data needed for each business scenario
- Identifying and meeting the business requirements for the required data
- Identifying how to gather the data for testing
- Identifying how to load the test (or production) systems with the necessary data
- Identifying how to clean up data and restore it after testing
- Configure the test environment: Prepare the elements of the test environment and instruments needed to monitor resources.
- Implement your test design: Build the tests from the design the test team has developed.
- Execute tests: In addition to running the performance tests, monitor and capture the data generated.
- Analyze, report, retest: Analyze the captured data and share the findings. Rerun the tests with the same parameters to confirm fixes, then vary the parameters to explore other scenarios.
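The execute-tests step above usually means driving many simulated users against the system at once. The following is a minimal sketch of that idea using a thread pool; `send_request` is a hypothetical stand-in for a real HTTP call, and the user and request counts are illustrative, not recommendations.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def send_request(i: int) -> float:
    """Stand-in for a real request; returns measured response time in ms."""
    start = time.perf_counter()
    time.sleep(0.01)  # placeholder for network and server latency
    return (time.perf_counter() - start) * 1000

CONCURRENT_USERS = 10   # simulated concurrency (pool size)
TOTAL_REQUESTS = 50     # total requests submitted during the test

# Drive the workload and capture a latency sample per request.
with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    latencies = list(pool.map(send_request, range(TOTAL_REQUESTS)))

print(f"requests: {len(latencies)}, "
      f"avg latency: {sum(latencies) / len(latencies):.1f} ms")
```

A real harness would replace the sleep with actual requests and record errors and byte counts alongside latency, so the metrics described earlier can be computed afterward.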
Performance testing best practices
An important principle in performance testing is to test early and often. A single test will not tell developers all they need to know. Performance testing requires a progression of tests that target specific business requirements. In that process, you will find issues that require resolution, and you must then validate that those issues have been resolved. Do not wait until the end of the development cycle and rush performance testing as the project winds down or is completed; there is value in load testing the key functional components of a system before they are fully integrated.
In addition to repeated testing, here are 9 performance testing best practices:
- Involve developers, IT, and manual testers in creating a performance engineering practice.
- Remember real people will be using the software that is undergoing performance testing. Determine how the results will affect users, not just test environment servers.
- Go beyond performance test parameters. Develop a model by planning a test environment that takes into account as much user activity as possible.
- Always capture baseline measurements, because they provide the starting point for determining success or failure as you push system limits with testing.
- Performance tests are best conducted in test environments that are as close to the production systems as possible, ideally in production itself or in a disaster recovery environment.
- Isolate the performance test environment from the environment used for quality assurance testing. Don’t allow functional or user acceptance testing to corrupt or negatively impact your performance testing process. If those activities must take place in parallel, ensure that you have an isolated performance testing environment to minimize result contamination and corruption caused by uncontrolled activities that are outside of performance testing.
- No performance testing tool can test every application architecture, and limited resources may restrict choice even further. Research performance testing tools to find the appropriate tool for your application architecture and needs.
- You can never gather too much data. Use tools that give you a wide range of statistics, such as minimums, maximums, averages, and standard deviations. Taken together, this data provides what you need to perform analysis and make recommendations.
- Know the audience when creating reports that share the findings and results of the performance testing efforts, and craft your reporting accordingly. Including system and software change logs in the reports is highly recommended.
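The baseline practice above can be automated: once baseline measurements exist, each new run can be compared against them and degradations flagged. This sketch assumes a simple dictionary of metrics and a 20% tolerance; the metric names, numbers, and threshold are all illustrative assumptions.

```python
# Hypothetical baseline and current-run metrics (values are illustrative).
baseline = {"avg_response_ms": 180.0, "error_rate": 0.01, "throughput_kb_s": 950.0}
current  = {"avg_response_ms": 240.0, "error_rate": 0.012, "throughput_kb_s": 920.0}

THRESHOLD = 0.20  # flag metrics that degrade more than 20% from baseline
HIGHER_IS_WORSE = {"avg_response_ms", "error_rate"}  # throughput is better when higher

regressions = []
for name, base in baseline.items():
    delta = (current[name] - base) / base
    if name not in HIGHER_IS_WORSE:
        delta = -delta  # a drop in throughput counts as a degradation
    if delta > THRESHOLD:
        regressions.append((name, round(delta, 3)))

print(regressions)  # metrics that regressed beyond the threshold
```

Here only average response time trips the threshold (a 33% increase); error rate grew but stayed within tolerance, and throughput dipped only slightly. A check like this makes pass/fail criteria explicit from run to run.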
Count On Foulk Consulting For Reliable Software Performance Testing
Since 2002, Foulk Consulting has been committed to giving businesses the power to solve problems through technology. Our industry-proven performance engineering experts help you determine the testing methods and metrics that will give you a clear look at how your software will respond to real-world use. Our services span a spectrum from ad-hoc consulting to mature performance engineering processes and ongoing consulting that supports your product from initial release through every update that follows. Allow Foulk to help you assess your needs and capabilities and find the performance testing tool that best fits your business needs today.
Whether you are trying to get ahead of performance issues or have already experienced defects that you need to follow up on, we can help. Contact us today to share your story and learn the solutions we can provide.