All websites and applications, and I mean all, need to be proven against performance standards before they are delivered to the client. Performance (or benchmark) testing is an ongoing software quality assurance function that extends throughout the life cycle of a project. To build standards into the architecture of a system, the stability and response time of an application are tested extensively by applying a load or stress to the system.
Essentially, ‘load’ refers to the number of users using the application; ‘stability’ refers to the ability of the system to withstand the load created by the intended number of users; and ‘response time’ is the time taken to send a request, run the program, and receive a response from the server.
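To make ‘response time’ concrete, here is a minimal Python sketch (not J-Meter) that times a single request against a throwaway local HTTP server; the server and URL are stand-ins for the real application under test.

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A stand-in server so the example is self-contained;
# in practice the target would be the application under test.
class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

# Response time: send the request and wait for the full response.
start = time.perf_counter()
with urllib.request.urlopen(url) as resp:
    body = resp.read()
elapsed = time.perf_counter() - start

print(f"status={resp.status} response_time={elapsed * 1000:.1f} ms")
server.shutdown()
```

In a real test the same measurement would be repeated many times, under concurrent load, and aggregated.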
Testing can be a challenging ordeal if a performance testing strategy is not determined in advance. The tasks involved require a multifaceted skill set: writing test scripts, monitoring and analyzing test results, tweaking custom code and scripts, and developing automated test scenarios for the actual testing.
So, is testing really necessary?
An important outcome of quality testing is to ensure that the system is reliable, built for capacity, and scalable. Achieving this involves several stakeholders, who decide how much budget to invest based on business impact.
This raises some questions: how do we predict traffic based on past trends? How can we make the system efficient enough to handle that traffic without any dropouts? And if and when we hit peak loads, how will we address the additional volume? To answer these, a performance testing strategy needs to be outlined beforehand.
When performance testing is done right, it:
1. Identifies issues early on, before they become too costly to resolve.
2. Reduces development cycles and produces better-quality, more scalable code.
3. Prevents revenue and credibility loss due to poor website performance.
4. Enables intelligent planning for future scaling.
5. Ensures that the system meets performance expectations (response time, throughput, etc.) under the designed levels of load.
6. Exposes bugs that do not surface in cursory testing, such as memory management bugs, memory leaks, and buffer overflows.
More importantly, manual testing is usually not the preferred approach: it is expensive, requiring large amounts of personnel and hardware; it is complex to coordinate and synchronise multiple testers; and its repeatability is limited.
In order to find the stability and response time of each API, we can test different scenarios by varying the load on the application at different time intervals, and then automate those scenarios using a performance testing tool.
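As a rough illustration of varying the load, the following Python sketch runs the same scenario at increasing user counts and reports the average response time. Here `send_request` is a hypothetical stand-in for a real API call; in practice it would issue an HTTP request to the system under test.

```python
import concurrent.futures
import time

def send_request():
    """Stand-in for a real API call; replace with an actual HTTP request."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated server work
    return time.perf_counter() - start

# Vary the load: run the same scenario with increasing numbers of
# concurrent "users", and compare the resulting response times.
for users in (1, 5, 10):
    with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(lambda _: send_request(), range(users * 3)))
    avg_ms = sum(times) / len(times) * 1000
    print(f"{users:>2} users -> {len(times)} requests, avg response {avg_ms:.1f} ms")
```

Watching how the average changes as the user count grows is exactly the kind of trend a load test is meant to expose.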
Performance Testing Tools
There are a variety of tools available to testers, such as:
Open source — OpenSTA, DieselTest, TestMaker, Grinder, LoadSim, JMeter, Rubis.
Commercial — LoadRunner, Silk Performer, QEngine, Empirix e-Load.
Among these, the most commonly used tool is J-Meter, because it is a 100% Java desktop application with a graphical interface built on the Swing API. It can therefore run on any environment or workstation with a Java virtual machine, for example Windows, Linux, or macOS.
We can automate the application by integrating Selenium scripts into the J-Meter tool. (J-Meter can perform load tests, performance tests, functional tests, regression tests, etc. across different technologies.)
If the project is large in scope and the number of users keeps increasing day by day, the load on the server side grows with it. Performance testing is used to identify the point at which the application will fail under such conditions, and the J-Meter tool helps us find the number of errors and warnings produced along the way.
How J-Meter Works
J-Meter simulates a group of users sending requests to a target server, and returns statistics that show the performance/functionality of the target server/application via tables, graphs, etc.
Take a look at the following figure that depicts how J-Meter works:
The J-Meter performance testing tool can be used to measure the performance of any application, no matter which language was used to build it.
First, a test plan is needed, which describes the series of steps J-Meter will execute when run. A complete test plan consists of one or more thread groups, samplers, logic controllers, listeners, timers, assertions, and configuration elements.
The thread group element is the starting point of any test plan; it controls the number of threads J-Meter will use during the test run. Through the thread group we can set the number of threads, the ramp-up period, and the loop count. The number of threads represents the number of users hitting the server application, the ramp-up period defines the time J-Meter takes to get all the threads running, and the loop count sets the number of times the test is executed.
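The thread group semantics described above (threads, ramp-up, loop count) can be sketched in plain Python. This is an illustration of the idea, not J-Meter's implementation; `run_thread_group` and `sample` are names invented for the example.

```python
import threading
import time

def run_thread_group(num_threads, ramp_up, loop_count, action):
    """Start num_threads workers staggered evenly across ramp_up seconds;
    each worker runs `action` loop_count times (J-Meter-style semantics)."""
    delay = ramp_up / num_threads
    threads = []
    for i in range(num_threads):
        def worker(start_delay=i * delay):
            time.sleep(start_delay)       # ramp-up: stagger thread starts
            for _ in range(loop_count):   # loop count: repeat the test
                action()
        t = threading.Thread(target=worker)
        t.start()
        threads.append(t)
    for t in threads:
        t.join()

results = []
lock = threading.Lock()

def sample():
    """Record one sample; a real action would send a request to the server."""
    with lock:
        results.append(time.perf_counter())

# 4 "users", started over 0.2 s, each executing the test 3 times.
run_thread_group(num_threads=4, ramp_up=0.2, loop_count=3, action=sample)
print(f"total samples: {len(results)}")
```

With 4 threads and a loop count of 3, the run produces 4 × 3 = 12 samples in total.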
Once the thread group is created and the number of users, iterations, and ramp-up time are defined, J-Meter creates virtual users accordingly and starts performing the actions based on the defined parameters. Internally, J-Meter records all the results, such as response code, response time, throughput, and latency, and produces them in the form of graphs, trees, and tables.
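As a sketch of the kind of statistics J-Meter derives from its recorded results, the following Python snippet computes average response time, throughput, and error rate from a handful of made-up sample records; the numbers are illustrative, not real measurements.

```python
# Assumed sample records: (response_time_seconds, success_flag) per request.
samples = [(0.120, True), (0.095, True), (0.310, False), (0.150, True)]

elapsed_wall_clock = 2.0  # seconds the whole test ran (illustrative)

times = [rt for rt, _ in samples]
avg = sum(times) / len(times)                                  # mean response time
slowest = max(times)                                           # worst-case response
throughput = len(samples) / elapsed_wall_clock                 # requests per second
error_rate = sum(1 for _, ok in samples if not ok) / len(samples)

print(f"avg response: {avg * 1000:.1f} ms, max: {slowest * 1000:.1f} ms")
print(f"throughput: {throughput:.1f} req/s, errors: {error_rate:.0%}")
```

These are the same headline metrics a listener such as a summary report presents, just computed by hand.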
J-Meter has two types of controllers: samplers and logic controllers. Samplers allow J-Meter to send specific requests to a server, while logic controllers determine the order in which samplers are processed within a thread; they can change the order of requests coming from any of their child elements. Listeners are then used to view the results of samplers in the form of reporting tables, graphs, trees, or simple text in log files.
It is important to remember that performance testing should always be done by changing one parameter at a time, so that response and throughput metrics can be monitored and discrepancies corrected accordingly. The real purpose of testing is to ensure that the application or site is functional enough for businesses to deliver real value to their users: so test practically, and think like a real user.