
Performance Testing, in Detail

Performance testing has become a cliché term for many testers, often used so loosely that it loses its meaning. Performance testing, however, is much more than determining the efficiency of software under stress conditions.

Experts believe that performance testing can be broadly categorized into three styles: Proactive, Reactive, and Passive. Each has its own distinct timing, purpose, focus, and audience, yet many testers are unaware of when and why each is performed.

Proactive Testing: This is the primary form of testing that engineers should perform during the development stage itself. It is intentional and deliberate: an active attempt to identify performance issues.

The idea is to spot a problem before it becomes problematic. Catching performance issues at the development stage is much easier and cheaper. It is referred to as proactive testing because the tester goes out of the way to detect issues early.
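
As a minimal illustration of proactive testing, the sketch below runs a micro-benchmark inside an ordinary unit test, so a slow code path fails the build while it is still cheap to fix. The function under test and the 200 ms budget are hypothetical, chosen only for the example:

    import statistics
    import time
    import unittest


    def build_report(rows):
        # Hypothetical code under test: aggregates a list of numeric rows.
        return sum(rows) / len(rows)


    class ProactivePerformanceTest(unittest.TestCase):
        BUDGET_SECONDS = 0.2  # assumed performance budget for this operation

        def test_build_report_within_budget(self):
            rows = list(range(100_000))
            samples = []
            for _ in range(10):  # repeat to smooth out timing noise
                start = time.perf_counter()
                build_report(rows)
                samples.append(time.perf_counter() - start)
            median = statistics.median(samples)
            self.assertLess(median, self.BUDGET_SECONDS,
                            f"median {median:.3f}s exceeds the budget")


    if __name__ == "__main__":
        unittest.main()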

Reactive Testing: It is impossible to find or fix every performance issue during the development phase. Reactive testing is a slightly more defensive style, usually carried out in response to an event in the development lifecycle, such as a build or a release. It should be reasonably automated and run in as live-like an environment as possible. This synthetic testing makes it possible to measure regressions before they reach production. Moreover, the business can decide whether a regression is severe enough to delay a release, or whether its fix should be prioritized in the next sprint. Accordingly, reactive testing carries partial organisational visibility: both developers and product people should be aware of, and responsive to, reactive performance tests.
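
One possible shape for such a check, wired into a build pipeline, is sketched below. The endpoint URL, the baseline file, and the 20% tolerance are all assumptions for illustration; real tooling would record and version the baseline properly:

    import json
    import time
    import urllib.request

    BASE_URL = "http://staging.example.com/api/health"  # hypothetical endpoint
    BASELINE_FILE = "latency_baseline.json"             # hypothetical baseline store
    TOLERANCE = 1.20  # flag a regression if median latency grows by more than 20%


    def measure_median_latency(url, runs=20):
        samples = []
        for _ in range(runs):
            start = time.perf_counter()
            with urllib.request.urlopen(url) as response:
                response.read()
            samples.append(time.perf_counter() - start)
        samples.sort()
        return samples[len(samples) // 2]


    if __name__ == "__main__":
        current = measure_median_latency(BASE_URL)
        with open(BASELINE_FILE) as f:
            baseline = json.load(f)["median_latency"]
        if current > baseline * TOLERANCE:
            # Surface the regression so the business can decide whether to
            # delay the release or schedule the fix for the next sprint.
            raise SystemExit(f"Regression: {current:.3f}s vs baseline {baseline:.3f}s")
        print(f"OK: {current:.3f}s (baseline {baseline:.3f}s)")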

Passive Testing: Things can go wrong even after two rounds of testing. Once the system is launched, it is time to sit back and observe the behaviour of the application. Things to watch for include differences in performance across geographies, and whether changes in performance correlate with changes in business metrics. For this, we need to turn to Real User Monitoring (RUM). Issues detected at this stage are difficult to fix, since configurations have to be changed, and fixes also take longer because data has to be gathered and assessed first.
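
Real User Monitoring is normally handled by dedicated tooling, but the underlying idea can be sketched as server-side instrumentation that records each real request's latency together with context such as the user's region. The handler and field names below are illustrative, not a real API:

    import json
    import logging
    import time
    from functools import wraps

    logging.basicConfig(filename="rum.log", level=logging.INFO, format="%(message)s")


    def monitor(handler):
        """Record latency and context for every real user request."""
        @wraps(handler)
        def wrapped(request):
            start = time.perf_counter()
            response = handler(request)
            logging.info(json.dumps({
                "path": request["path"],
                "region": request.get("region", "unknown"),  # e.g. from geo-IP
                "latency_ms": round((time.perf_counter() - start) * 1000, 1),
                "timestamp": time.time(),
            }))
            return response
        return wrapped


    @monitor
    def handle_checkout(request):
        # Hypothetical handler standing in for real application code.
        time.sleep(0.05)
        return {"status": 200}


    if __name__ == "__main__":
        handle_checkout({"path": "/checkout", "region": "eu-west"})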

Performance Testing types

Having discussed the above, it is also important to understand the main types of performance testing.

Some of them are listed below, with a rough workload sketch after the list:

  • Capacity Testing
  • Load Testing
  • Volume Testing
  • Stress Testing
  • Soak Testing
  • Spike Testing
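
These variants differ mainly in the shape and duration of the workload applied. As a rough sketch (the target URL is hypothetical and the numbers illustrative), the harness below drives concurrent requests using only the standard library; raising the worker count sharply approximates a spike test, pushing it past capacity a stress test, and running it for hours a soak test:

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    TARGET_URL = "http://staging.example.com/"  # hypothetical system under test


    def hit(url):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                response.read()
            return time.perf_counter() - start, None
        except OSError as exc:
            return time.perf_counter() - start, exc


    def run_load(url, workers, total_requests):
        # Steady concurrency approximates a load test; vary `workers` and
        # the run length to approximate spike, stress, and soak tests.
        with ThreadPoolExecutor(max_workers=workers) as pool:
            results = list(pool.map(hit, [url] * total_requests))
        latencies = sorted(t for t, err in results if err is None)
        errors = sum(1 for _, err in results if err is not None)
        if latencies:
            print(f"{len(latencies)} ok, {errors} errors, "
                  f"median {latencies[len(latencies) // 2]:.3f}s")
        else:
            print(f"all {errors} requests failed")


    if __name__ == "__main__":
        run_load(TARGET_URL, workers=20, total_requests=200)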

Tools for performance testing

Some of the most popular tools for performance testing are:

  • HP LoadRunner
  • Apache JMeter
  • WebLOAD
  • LoadView
  • Appvance
  • Loadster
  • LoadStorm
  • CloudTest
  • httperf
  • OpenSTA
  • WAPT
  • NeoLoad

Performance Testing process

Given below are the seven steps that are performed during performance testing:

  1. First, a proper requirement study is done, followed by an analysis of the test goals and objectives. The testing scope and a test initiation checklist are also determined. Identifying the logical and physical production architecture for performance testing, and identifying the software, hardware, and network configurations required to initiate it, form an integral part of this step. While identifying the testing environment, the test and production environments are compared.
  2. The desired performance characteristics of the application, such as response time, throughput, and resource utilization, must be identified.
  3. Some of the other key factors that need to be identified include key usage scenarios, determining appropriate variability across users, identifying and generating test data, and specifying the metrics to be collected. Finally, these items provide the foundation for workloads and workload profiles.
  4. Before the tests are executed, the conceptual strategy must be turned into designed tests using the available tools, and the test environment must be configured. This includes preparing the environment, tools, and resources necessary to implement each strategy as features and components become available for testing.
  5. The performance tests are then created in accordance with the test plan and design.
  6. Next comes execution, where the tests are run and monitored. The tests and test data are then validated and the results collected.
  7. Finally, the results are consolidated and the data is shared. The data must be analysed both individually and as a cross-functional team, and the remaining tests reprioritized and re-executed. When all the metric values are within their limits and none has violated a threshold, the test for that scenario on that configuration is complete, and it is time to collect the desired information; a minimal sketch of this analysis follows the list.
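
As a minimal sketch of the threshold analysis in the final steps, the snippet below computes a response time percentile and throughput from collected samples and checks them against agreed limits. The sample values and thresholds are illustrative assumptions, not recommendations:

    import statistics

    # Samples collected while executing and monitoring the tests (step 6);
    # the values here are illustrative.
    response_times = [0.12, 0.15, 0.11, 0.34, 0.13, 0.18, 0.16, 0.95, 0.14, 0.17]
    test_duration_seconds = 2.0

    # Limits agreed during requirement analysis (steps 1-2); assumed values.
    THRESHOLDS = {"p90_seconds": 0.5, "throughput_rps": 4.0}

    p90 = statistics.quantiles(response_times, n=10)[-1]      # 90th percentile
    throughput = len(response_times) / test_duration_seconds  # requests per second

    violations = []
    if p90 > THRESHOLDS["p90_seconds"]:
        violations.append(f"p90 {p90:.3f}s exceeds {THRESHOLDS['p90_seconds']}s")
    if throughput < THRESHOLDS["throughput_rps"]:
        violations.append(f"throughput {throughput:.1f} rps is below target")

    # The scenario passes only when no metric violates its threshold (step 7).
    print("PASS" if not violations else "FAIL: " + "; ".join(violations))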

 

QualiTest Group is the world’s largest software testing and quality assurance company, offering software testing and QA services and committed to the highest standards of quality in every project.
