Executing performance tests is relatively easier than scripting, because the tool does most of the work and the tester only needs to set a handful of configurations. The two key configurations are user count and duration. A performance test will not usually have just one script running; rather, a set of scripts is executed in parallel as a combined scenario. This reflects different sets of users performing different operations on the same server. So it is very important for us to configure the scenario properly before hitting the start button.
Recall our first few lessons on load test planning, where we identified the most frequently used scenarios and their priorities. We may want to run 1000 virtual users, but how do we distribute those 1000 virtual users across the different scripts? It is better to get statistics from both the business team and the web server admin team, since they can tell us the historic usage of each transaction. In a banking scenario, we may see x% of users doing balance enquiries, y% doing deposits, z% doing withdrawals, and so on. Usually the business team can tell us how many deposits, withdrawals, balance enquiries, utility bill payments, etc. happened in the last quarter or month, in terms of number of transactions. From those numbers, we can arrive at the percentage of transactions for each activity. If the total is 100,000 transactions and deposits account for 12,500 of them, we can say the deposit transaction consumes 12.5% of the total load on the server, and so on.
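As a rough sketch of this calculation (the script names and transaction counts below are hypothetical, just for illustration), the virtual user split can be derived directly from the historical numbers:

    # Hypothetical historical transaction counts for the last quarter
    history = {
        "balance_enquiry": 50000,
        "deposit": 12500,
        "withdrawal": 25000,
        "bill_payment": 12500,
    }

    total = sum(history.values())          # 100,000 transactions in this example
    peak_users = 1000                      # total virtual users we plan to run

    for script, count in history.items():
        share = count / total              # e.g. deposit -> 12500 / 100000 = 12.5%
        vusers = round(peak_users * share) # virtual users allocated to this script
        print(f"{script}: {share:.1%} of load -> {vusers} virtual users")

The same percentages can then be fed into the scenario configuration of whichever load testing tool is being used.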
We now have to fix the total duration of the run. We usually try to run scenarios for at least 1 hour with all users at peak load. Running for a longer duration gives better statistics, and it also tells us about the reliability and consistency of the servers and applications under sustained load. If a bank branch works from 9 to 3, we would rather run our tests for 3 hours (half of the working day). Again, this is our way of planning; different consultants suggest anywhere between 25% and 75% of the total office hours.
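A minimal sketch of that rule of thumb, assuming a 9-to-3 working day and a configurable ratio (we use 0.5 here; other consultants use anywhere from 0.25 to 0.75):

    office_hours = 6          # bank branch open from 9 to 3
    duration_ratio = 0.5      # our rule of thumb: half the office hours

    peak_duration_hours = office_hours * duration_ratio
    print(f"Run at peak load for {peak_duration_hours:g} hours "
          f"({peak_duration_hours * 60:g} minutes)")   # 3 hours = 180 minutes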
But there is one important aspect to how virtual users are released to hit the server. If I need to run 1000 virtual users, all 1000 will not start at the same time. In real life, a crowd builds up slowly, both on roads and on the web, so we need to ramp up the user count gradually rather than starting with a big bang. If I need to run 1000 users for 3 hours (180 minutes) at peak load, how much time should I set aside for user ramp-up? We usually suggest the 80:20 principle: take 20% of the total peak-load duration and allocate that for ramp-up. Thus, to run a scenario for 180 minutes at peak, I may allow about 35 minutes (20% of 180 is 36) for users to ramp up and then run 180 minutes at peak. This means the test will run for roughly 35 + 180 minutes. Some companies include the ramp-up time within the total duration and some do not; it does not make a big difference either way.
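The 80:20 calculation itself is simple; a small sketch, continuing the 180-minute example from above:

    peak_duration_min = 180                   # peak-load duration from the previous step
    ramp_up_min = peak_duration_min * 0.20    # 80:20 rule -> 36 minutes (we round to ~35)

    total_test_min = ramp_up_min + peak_duration_min
    print(f"Ramp-up: {ramp_up_min:g} min, peak: {peak_duration_min} min, "
          f"total: {total_test_min:g} min")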
If 1000 users need to ramp up in about 35 minutes, how do we release new users into the load pool? We can either distribute them evenly or release them in batches. With even distribution, I can release 1 user every 2 seconds, which gives 1000 users by the end of the 2000th second: the first user starts, 2 seconds later one more user is added, after another 2 seconds another user is added, and so on. The other way is releasing in batches, for example 30 users every minute. This is purely a subjective decision and it will vary from project to project. In an online examination scenario, all users will ramp up within 5 minutes, even though the exam duration is 2 or 3 hours.
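A short sketch contrasting the two release patterns (the interval and batch size below are the example values from this lesson, not fixed rules):

    users = 1000
    ramp_up_seconds = 2000   # about 33 minutes, within the ~35-minute window

    # Even distribution: one new user every (ramp_up_seconds / users) seconds
    interval = ramp_up_seconds / users          # 2 seconds per user
    print(f"Even ramp-up: add 1 user every {interval:g} s; "
          f"user {users} starts at second {users * interval:g}")

    # Batch release: a fixed number of users at the start of every minute
    batch_size = 30
    minutes_needed = -(-users // batch_size)    # ceiling division -> 34 minutes
    print(f"Batch ramp-up: add {batch_size} users per minute; "
          f"all {users} users active after ~{minutes_needed} minutes")

Most load testing tools let you express either pattern directly in the scenario scheduler; the arithmetic above simply helps you sanity-check that the ramp-up fits inside the window you planned.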
For free lessons on automation tools, visit us at http://www.openmentor.net.