Performance testing is a form of software quality assurance that determines how software or web applications will perform under a specific workload. The process measures the responsiveness, speed, stability, and other performance characteristics of software, websites, and systems. In short, software performance testing determines whether a system is ready to keep functioning under less-than-ideal circumstances.
While the tech industry in general typically considers performance testing to be “non-functional,” to us, that just doesn’t make sense. Performance under all conditions is an essential business requirement. In fact, one might argue that a website or software platform is only rightfully called functional if it will operate under high demand. Anything can be created to work well in a vacuum, but it’s how a product performs in the real world that determines its success.
In this article we’ll share the most recent information about what is meant by performance testing, how this process works, and what it looks like when software performance testing is done right.
What is Meant by Performance Testing?
Performance testing is an umbrella term for many different kinds of software quality assurance testing, including:
- Load Testing measures system response time and performance as usage increases. This might mean different users carrying out the same transaction at the same time. Though this testing should be rigorous, it never pushes the system beyond expected usage.
- Stress Testing is the next step up from load testing, pushing the platform to its absolute limits. Rather than confirming the software will perform, a stress test is meant to teach you at what point the system will fail.
- Endurance Testing confirms how the product will perform over time at normal workloads. This testing is especially important for uncovering memory leaks and other system problems that only emerge after the initial deployment.
- Spike Testing evaluates how the system will perform or fail when spikes in use occur repeatedly in rapid succession. Testing what happens when the workload increases quickly and then decreases back to normal can reveal issues at both ends of the spike.
- Scalability Testing determines if the product can handle gradually increasing workloads, especially in terms of managing user data. This testing can be carried out by increasing the inputs, or by keeping the workload static and changing back-end elements like CPU or memory availability.
- Volume Testing is also known as flood testing, because the system is flooded with data. This is similar to stress testing on the memory and processing side of the product.
- Security Testing is sometimes grouped with performance testing, though many development teams are now carrying out security testing as a wholly separate quality assurance process.
These types of performance testing can sometimes be carried out manually, but manual testing should really pick up where automation leaves off. Quality assurance professionals create a test environment where the software is put through its paces with test automation. Used correctly, performance testing tools help teams answer “what-if” scenarios that align with your biggest goals. For instance, you can test what happens if your orders suddenly go up by 1,000%. If and when errors occur during the test scenario, humans can follow up on the detected errors and failures to confirm the conditions and the causes.
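To make the “what-if” idea concrete, here is a minimal sketch in Python. The `place_order` function is a stand-in for a real transaction (a real tool would issue scripted protocol traffic), and the user counts are illustrative. It runs the same transaction at normal concurrency and again at a 1,000% surge, recording the worst response time at each level:

```python
import concurrent.futures
import random
import time

def place_order(order_id):
    """Stand-in for a real transaction (e.g. an HTTP call to an
    ordering endpoint); sleeps briefly to simulate service time."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.001, 0.005))
    return time.perf_counter() - start

def run_what_if(normal_users=10, surge_factor=10):
    """Run the transaction at normal concurrency and again at
    normal * surge_factor (a 1,000% jump), recording the worst
    response time seen at each level."""
    results = {}
    for label, users in (("normal", normal_users),
                         ("surge", normal_users * surge_factor)):
        with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
            durations = list(pool.map(place_order, range(users)))
        results[label] = max(durations)
    return results
```

A real performance testing tool replaces the stand-in with recorded user traffic and captures far richer metrics, but the shape of the experiment, comparing a baseline against a surge, is the same.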
What is a Performance Testing Example?
Performance test examples help convey the practical importance of performance testing. Here are a few, along a spectrum of maturity:
- Sending multiple jobs to a printer from multiple devices to see how it handles the requests.
- Testing a mail server by accessing all the inboxes within a specified period of time.
- Running multiple applications simultaneously on a server.
- Identifying bottlenecks or conflicts between servers and systems in your infrastructure that need to perform well together.
- Creating thousands of new accounts within a set period of time on an eCommerce platform.
- Mimicking holiday events by simulating the effect of thousands of visitors coming to a website during a specified amount of time.
- Modeling peak load events to recreate, isolate, and identify the root cause of a production failure.
These examples reflect that hardware, software, and cloud services all need to be subjected to performance testing. Otherwise, any of these technologies can fail at a critical moment.
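The holiday-spike scenario above, for example, can be expressed as a simple load schedule. This is an illustrative sketch, not any particular tool's format: checkpoints of (seconds elapsed, target concurrent users) that ramp sharply to a peak, hold, then drop back, so behavior on both sides of the spike gets exercised:

```python
def spike_schedule(baseline=50, peak=5000, ramp_s=60, hold_s=300):
    """Checkpoints of (time offset in seconds, target concurrent users)
    for one spike: ramp quickly to the peak, hold it, then return to
    baseline. The numbers are placeholders, sized to your own traffic."""
    return [
        (0, baseline),                         # steady state before the spike
        (ramp_s, peak),                        # sharp ramp up
        (ramp_s + hold_s, peak),               # hold at peak
        (ramp_s + hold_s + ramp_s, baseline),  # ramp back down
    ]
```

Running several such spikes back to back is what distinguishes a spike test from a one-off stress test.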
What Does A Performance Test Provide?
After completing performance testing, engineers or developers must understand and analyze their performance testing outcomes. Performance testing results analysis is the process of taking raw performance testing metrics and translating them into terms everyone can discuss and react to. Generally these metrics fall into three categories:
- Speed: Metrics in this category include load time, response time, bytes per second, queue length, throughput, and garbage collection.
- Capacity: Metrics in this category include latency/lag, wait times for database operations, the impact of concurrent users, and overall memory or CPU usage.
- Consistency: Metrics in this category include maximum active sessions, error rate, page faults, and thread counts.
It’s important for teams to understand not just the averages but how often negative events occur. Remember the main goal of quality assurance testing is knowing the risks in the platform so you can manage them. Sometimes an error or fault will be critical to address. Other times, a known error may be acceptable to address through other processes rather than redeveloping the code. Performance testing reveals what you don’t know about the product so you can anticipate catastrophes and keep the rest in mind for future releases or updates.
Performance Testing Life Cycle
The typical performance testing life cycle has seven phases:
- Risk Assessment
The first phase of performance testing, risk assessment involves determining what testing is needed on each individual component of the software, system, or application. This not only helps testers understand the full scope of testing needed, but also allows for that testing to be prioritized according to the significance of known and potential risks. This phase also creates early opportunities for conversation among developers, testers, and other stakeholders about new and existing functionalities, business flows that must be preserved, the application architecture, and other important details.
- Requirement Gathering and Analysis
Once the components that need to be tested have been identified and prioritized, the conversation continues as stakeholders define the metrics that need to be tested. These could include the number of users, error rate, memory usage, page response time, or more complicated testing pathways. This phase includes conversations among testers and non-technical stakeholders like end users, so it’s important to take the time to fully explore what everyone expects to learn from the tests, and what the ideal performance looks like.
- Performance Test Planning
Now that the requirements have been fully defined and agreed upon by all parties, the time has come for the performance testers to plan the test(s). In this phase, the requirements are laid out in writing and matched to the different tests that will provide the needed information. This is also the time to agree on when the tests will be conducted, how long they should take, and who is responsible for performing each test. Lastly, this is an opportunity to document the identified risks, the assumptions about the risks, the issues each risk might cause, and cross-dependencies within the architecture that could cause a risk to blow up. All the stakeholders should sign the performance testing plan to acknowledge that they understand and agree to all this information.
- Performance Test Design/Scripting
With the plan approved, the performance tests themselves must be created. This means writing the code that will mimic real-world user behavior and generate the simulated traffic that will put the system’s performance to the test at peak real-world conditions or beyond. As they write the tests, performance testing engineers document the steps of user behavior that the test needs to imitate, and then set parameters like think time, which is how long the test pauses during one behavior before moving on to the next. This simulates the behavior of real-world users even further, since humans will not move from task to task as quickly as automated systems.
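As a sketch of what such a script looks like (the journey steps, timings, and hooks here are hypothetical), each step of the recorded user behavior is followed by a randomized think-time pause so the simulated user paces itself like a human:

```python
import random
import time

# Hypothetical user journey: (action name, think time in seconds).
JOURNEY = [
    ("open_home_page", 2.0),
    ("search_product", 5.0),
    ("add_to_cart", 3.0),
    ("checkout", 8.0),
]

def run_user(perform_action, jitter=0.3, pause=time.sleep):
    """Replay the journey, pausing a randomized think time after each
    step. perform_action is the hook that would issue the real request;
    pause is injectable so a dry run can skip the waiting."""
    for action, think in JOURNEY:
        perform_action(action)
        # Vary think time +/-30% so simulated users don't move in lockstep.
        pause(think * random.uniform(1 - jitter, 1 + jitter))
```

Randomizing the pauses matters: identical, perfectly synchronized virtual users create artificial request waves that real traffic never produces.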
- Workload Modeling
Workload modeling is another element of test creation, in which the needs for each specific test scenario in the plan are identified and structured. These metrics will include at least the number of users to be simulated in the test, and how many users per hour will be tested. Other potential metrics include requests per second, targeted response time, end-to-end response time, and more depending on the unique system being tested.
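A minimal workload model can be captured as plain data, from which derived rates fall out arithmetically. This sketch uses illustrative field names and numbers, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class WorkloadModel:
    concurrent_users: int              # users simulated at once
    sessions_per_user_per_hour: float  # how often each user returns
    requests_per_session: int          # requests in one user journey

    @property
    def requests_per_second(self) -> float:
        """Derived load the test must generate against the system."""
        per_hour = (self.concurrent_users
                    * self.sessions_per_user_per_hour
                    * self.requests_per_session)
        return per_hour / 3600

# 500 users x 6 sessions/hour x 12 requests/session
# = 36,000 requests/hour = 10 requests/second
model = WorkloadModel(500, 6, 12)
```

Writing the model down this way keeps the scenario numbers reviewable by stakeholders before any test code depends on them.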
- Performance Test Execution
After all the phases of discussion and planning, it is finally time to run the performance tests! This requires re-checking the tests before they are run to confirm that what is being executed matches the plan. Performance test engineers must also confirm elements on the hardware side, like available disk space and clean server logs, and that the test environment is stable. The tester should monitor the system throughout test execution to make sure the script is running as planned, keeping an eye on the error count and other basic metrics.
- Results Analysis, Reporting, and Recommendations
Once the tests are completed, a report on each individual test should be generated so each set of findings can be analyzed. A final test report should make the findings and their significance clear to everyone, including both technical and non-technical stakeholders. Each defect that was identified should be described in detail, so developers can seek out the cause of the issue(s). This root-cause analysis is a team effort where the tester and developer work alongside each other to not just identify the cause of the error, but brainstorm how to address it without causing other problems. After the application is fixed, the test should be repeated to confirm the fix works.
When new features or functionality are added to the product, or when events like operating system updates occur, the testing cycle will start over as new testing is necessary.
When Performance Testing is Done
Some quality assurance professionals or software engineers only conduct performance testing at the end of the development cycle or installation. But in Agile environments, it is far more productive to carry out performance testing in alignment with different development milestones. This saves the team a huge amount of rework when a core issue is detected early enough to be addressed simply. Otherwise, if foundational code is the cause of an issue at the end of development, it isn’t just that element that needs to be revised, but all the subsequent parts that rely on it to function. The need to test early and often for unexpected conditions and failure modes is why working with an outside quality assurance consultant can be especially beneficial to a team that is already at capacity creating the product.
Common Performance Testing Challenges
Performance testing challenges exist even for the most seasoned team of test managers and engineers. The strength of an excellent performance testing team like ours at Foulk Consulting isn’t the ability to avoid these challenges altogether, but rather to anticipate said challenges in advance and respond to them from the perspective of experience.
- Test Strategy and Coverage: Planning a strategy to test for all the requirements is the first and biggest challenge of performance testing. Early conversations in the performance testing life cycle must include brainstorming and creative thinking to plan the most efficient and useful test possible within the unique architecture of the system. The test strategy must encompass all the critical business functions, performance requirements, and other parameters.
- Selection of Testing Tools: Many teams of performance testers have their preferred tools. But the right tool for any specific project will also depend on factors like the tech stack of the software, the performance testing team’s skill level, the budget to license different performance testing tools, and other considerations and constraints. Pinpointing the right tool or tools may require extensive research, but it will yield more return on the investment of time, energy, and dollars.
- Testing Timeline & Budget: Performance testing requires a timeline and budget, but within an Agile development and deployment cycle, teams may not believe there is time for extra quality assurance steps. Planning performance testing into the workflow from the beginning of a project can offset this misconception and empower teams to release their product without compromising on quality and performance. There are many ways that experienced performance test engineers can help organizations achieve their goals with an eye to the timeline and budget constraints.
- Testing a Live Product: When a risk becomes known through an error in real-time, or a new release has the developers focused on quality assurance, testing a product or website that is live comes with its own unique challenges. Exploratory and/or diagnostic testing must be conducted in such a way that the experience of live users is not interrupted.
- Test Results Analysis: Lastly, testers need to draw on knowledge of a wide array of IT concepts to fully analyze and interpret the results of performance testing. These include, but aren’t limited to, knowledge of operating system concepts, networking concepts, firewalls, data structures, web architectures, and more.
How Can I Learn Performance Testing?
Whether you want to become a full-fledged performance tester or just deepen your understanding of testing concepts within a broader DevOps framework, there are many online or in-person training programs that can teach the basics and beyond. The International Software Testing Qualifications Board (ISTQB) is an international non-profit that certifies software testers. They offer two general knowledge testing certifications, the Foundation Level certification and the Business Analyst certification. Beyond these two entry-level software testing certifications, there are also specialty certifications available in areas like mobile testing, security testing, and even specific industries like gambling software testing or automotive software testing.
Which Tool Is Best for Performance Testing?
Here are the tools that Foulk Consulting uses for performance testing:
- Dynatrace: This application performance management platform is equipped to carry out many types of performance testing, including load testing, time-based testing of metrics like response time, front-end performance testing, and API integration testing for both functionality and security. The platform provider benefits from lessons learned through its industry-recognized legacy solution, AppMon, and today fully integrates with JMeter and NeoLoad.
- NeoLoad: Speaking of NeoLoad, this tool allows for the code-less design and automation of performance tests. This allows teams throughout the development pipeline to carry out resource-intensive testing like load testing when resources are guaranteed to be available. This enables the identification of issues like performance bottlenecks even when advanced user behavior is being modeled. Even better, the automatic test updates feature means once a test is created, it can be re-used on many different projects when the same requirements need to be tested. That’s called working smarter, not harder.
- Instana: Instana is an application performance monitoring platform that helps DevOps teams pivot their entire perspective on testing. Since this tool provides real-time performance insights as changes to the product are released, this means teams are no longer anticipating future trouble, but instead reacting to the answers and improvements that are needed today.
- Flood.io: Flood.io is a load testing platform that integrates with products like JMeter, Gatling, Selenium, or Element (Puppeteer) to define user behaviors and see how they impact system performance. Flood.io’s products can run both in the cloud or through on-premises technology. Options include real browser load testing, an on-premises load generator or multi-cloud load generator orchestration, and historical performance analysis, among other opportunities.
- Zenoss: Zenoss is another platform strengthened by real-time data, offering streaming alerts that notify a team when an event occurs. Integrated machine learning can also escalate high-priority events for more immediate attention. Not only will teams be able to monitor the entire IT ecosystem at scale, but they can also respond faster to ensure functional systems. This means optimized application performance, no matter how complex the system.
What Is New in Performance Testing?
Emerging technology and other innovations mean there is always something new to learn in the field of performance testing. Here are some of the current trends impacting performance testing in 2021:
- Machine Learning and Sentiment Analysis: Machine learning is changing many aspects of performance testing, from its ability to predict patterns and errors to its ability to escalate and prioritize errors. Another emerging innovation is sentiment analysis, in which a machine learning algorithm analyzes data like tickets and customer reviews to determine what your end users perceive as flaws. This allows developers to be more efficient at improving features that are in high demand.
- Self-Service Testing for Developers: While certain tests will always require a test engineer, there are also platforms putting more emphasis on giving developers insights while the code is being written. Since access to these insights isn’t limited by the user’s role, it’s easier for programmers to take action on errors, conduct performance engineering during development, and prevent future testing from becoming a significant project bottleneck.
- Chaos Engineering: Chaos engineering is an emerging approach to performance testing, where certain features or aspects of a system are pulled down and broken—on purpose—through planned experiments. This allows teams to test their assumptions about a system against what ends up being the result, and proactively address risks that might otherwise go undetected until a system failure or security compromise.
Trust Foulk Consulting With Your Performance Testing
Since 2002, Foulk Consulting has been committed to giving businesses the power to solve problems through technology. Our certified quality assurance professionals help you determine the testing and metrics that discover what real-world performance could impact your business. Our services are available along a spectrum from ad-hoc consulting all the way to mature performance engineering processes and ongoing consulting to support your product from initial release through every update that follows. Whether you are trying to get ahead of performance issues or have already experienced defects that you need to follow up on, we can help. Contact us today to share your story and learn about the solutions we can provide.