Tuesday, December 10, 2013

Performance Testing - Load Generators

In today's context, 100 vusers is the minimum load expectation for any web application. To test with just 100 or 200 users, one machine is enough to generate the load. Each vuser runs as a thread or process in the background, on the same machine where the load testing tool is installed. This machine is usually called the controller. Each thread/process occupies 1MB to 20MB of memory, depending on script size and data size, and each consumes some amount of cpu and disk as well. When we need to run 2000 vusers, a single machine is not enough to generate the load; we need to distribute the load generation process itself.

Imagine each vuser consuming 5MB of memory; if we run 1000 vusers, we need 5GB for the vusers alone, over and above what the OS and other software consume. If we have a machine with 4GB memory, we cannot run 1000 vusers from it; the load generating machine itself will crash. Also, when the responses for all 1000 vusers come back to the same machine, its network interface will choke. Hence every tool provides a facility to generate load from different machines. These are called load generators or load agents. The load generator machines must be accessible from the controller machine via LAN, and a small program, called the remote agent process, must be installed on each of them.


From the controller, we specify how many users are to be executed from each load generator machine. If our total user count is 1000, we can specify 300 from load-gen-1, 400 from load-gen-2 and 300 from load-gen-3. The target server under test must be accessible from all load gens; otherwise the scripts will fail. Once the load is distributed to all load gens, when the run starts, the tool sends the scripts to the load generators and instructs them to start the vuser threads/processes there. Hence, the memory and cpu of the controller are not consumed. Every 5 or 10 seconds, each load gen sends its status back to the controller.
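As a minimal illustration (not tied to any particular tool), here is a Python sketch that splits a total vuser count across load generators in proportion to their free memory, assuming roughly 5MB per vuser as discussed above; the machine names and memory figures are made up for the example.

# Split a total vuser count across load generators, proportional to the
# capacity each machine has (free memory / memory per vuser).
def allocate_vusers(total_vusers, free_memory_mb, mb_per_vuser=5):
    capacity = {gen: mem // mb_per_vuser for gen, mem in free_memory_mb.items()}
    total_capacity = sum(capacity.values())
    if total_vusers > total_capacity:
        raise ValueError("not enough load generator capacity for this run")
    allocation = {gen: total_vusers * cap // total_capacity
                  for gen, cap in capacity.items()}
    # Integer division leaves a small remainder; give it to the biggest machine.
    allocation[max(capacity, key=capacity.get)] += total_vusers - sum(allocation.values())
    return allocation

print(allocate_vusers(1000, {"load-gen-1": 2048, "load-gen-2": 3072, "load-gen-3": 2048}))
# {'load-gen-1': 285, 'load-gen-2': 430, 'load-gen-3': 285}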


This helps us in two ways. First, it lets us run a large number of vusers using multiple regular desktops/laptops, without needing high-end machines just to generate load. Second, we can run tests against the target server from remote machines other than the one where the tool license is installed. You may be in New York, USA, the target server may be in Ireland and the load generator may be in Los Angeles, USA. So this also lets us do a load test with load generated from a different geography, not just from the place where the server is installed.


For a high end load testing tool, visit http://www.floodgates.co.in

For free video lessons on load testing, visit http://www.openmentor.net.



Monday, November 18, 2013

Performance Testing - Configure vuser count, duration

Executing performance tests is relatively easier than scripting, because the tool does most of the work and the tester only needs to do a set of configurations. The two key configurations are user count and duration. A performance test will not usually have just one script running; rather, a set of scripts is executed in parallel as a combined scenario, to reflect different sets of users doing different operations on the same server. So it is very important to do proper configuration before hitting the start button.

Recall our first few lessons in load test planning. We identify a set of most frequently used scenarios and their priorities. We may want to run 1000 virtual users, but how do we distribute those 1000 virtual users across the different scripts? It is best to get stats from both the business team and the webserver admin team; they can tell us the historic usage of the transactions. In a banking scenario, we may see x% of users doing balance inquiries, y% doing deposits, z% doing withdrawals, etc. The business team can usually provide the number of deposits, withdrawals, balance inquiries, utility bill payments and so on in the last quarter/month, in terms of transaction counts. From those numbers, we can arrive at the percentage of transactions for each activity. If total transactions are 100,000 and deposits are 12,500, then deposits account for 12.5% of the total transactions on the server, and so on.

We now have to fix the total duration of the run. We usually try to run scenarios for at least 1 hour with all users at peak load. Running longer gives better statistics, and it also exercises the reliability and consistency of the servers and apps. If a branch of a bank works from 9 to 3, we might run our tests for 3 hours (half of the working day). Again, this is one way of planning; different consultants suggest anywhere between 25% and 75% of the total duration of the office hours.

There is one more important aspect: how virtual users are released to hit the server. If I need to run 1000 vusers, all 1000 will not start and hit the server at the same time. In real life, a crowd builds up slowly - on roads as well as on the web. Hence we need to ramp up the user count slowly, rather than doing a big bang. If I need to run 1000 users for 3 hours (180 minutes) at peak load, how much time must I allow for ramp-up? We usually suggest an 80:20 principle: take 20% of the total peak load duration and allocate that for ramp-up. Thus, to run a scenario for 180 minutes, I may allow 30-36 minutes for users to ramp up and then run 180 minutes at peak, so the test runs for about 35 + 180 minutes. Some companies include the ramp-up time within the total duration and some do not; it does not make a big difference.

If 1000 users need to ramp up in 35 minutes, how do we release new users into the load pool? You can either distribute them evenly or release them in batches. For even distribution, I can release 1 user every 2 seconds, which gives 1000 users at the end of the 2000th second (about 33 minutes): the first user starts, after 2 seconds one more user is added, after another 2 seconds another one, and so on. The other way is releasing in batches, say 30 users every minute. This is purely a subjective decision and it varies from project to project. In an online examination scenario, all users will ramp up within 5 minutes, even though the exam duration is 2 or 3 hours.
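Here is a minimal, tool-neutral Python sketch contrasting the two release styles described above; real tools expose these as ramp-up settings, so this is only to make the arithmetic concrete. Both functions return (seconds_from_start, users_added) pairs.

def even_rampup(total_users, interval_seconds):
    # One user every interval_seconds; 1000 users at 2 s apart = 2000 s ramp-up.
    return [(i * interval_seconds, 1) for i in range(total_users)]

def batch_rampup(total_users, batch_size, batch_interval_seconds):
    # batch_size users released together, every batch_interval_seconds.
    schedule, released, t = [], 0, 0
    while released < total_users:
        step = min(batch_size, total_users - released)
        schedule.append((t, step))
        released += step
        t += batch_interval_seconds
    return schedule

print(even_rampup(1000, 2)[-1])      # (1998, 1): the 1000th user starts at 1998 s
print(batch_rampup(100, 30, 60))     # [(0, 30), (60, 30), (120, 30), (180, 10)]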

For free lessons on automation tools, visit us at http://www.openmentor.net.

Tuesday, November 5, 2013

Performance Testing - Scripting Part 3

I always hear this from people: "scripting is the best part of tools and I want to master scripting". Most people confuse scripting with hand-coding. Hand-coding is remembering commands/syntax and typing them out, whereas scripting includes both hand-typing and generating code thru wizards. Tools provide a lot of ways to generate code. Machine-generated code is more trustworthy than hand-typed code, and it consumes less time. But people somehow feel good when they hand-type - known as the "my code" syndrome. Let us see when and where we must use scripting efficiently.

Mere record and replay cannot solve 100% of our problems; we need to use scripting as well. Typically it ranges from 5-15% of the overall work in any load testing project.

1. Manual correlation. When we need to manually correlate some dynamic data, we must use correlation commands. Nowadays many tools provide a facility to locate a text and add the command automatically. If that does not work well, you can do the same manually.

2. Taking a decision. Imagine a scenario in a ticket booking app. A page is displayed with flight data and we need to choose a flight between 8am and 9am. If the screen has no filter for such a time selection, the only way to achieve it is by scripting. You can get the whole response text into a variable thru correlation and then use string manipulation commands to select a flight matching the given criteria. Here we treat the html response as pure text data and locate the data we need, as in the sketch below.
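A minimal Python sketch of this idea, assuming (purely for illustration) that each flight in the response can be reduced to a "FLIGHTNO,HH:MM" line; a real script would first extract these lines from the html via correlation.

def pick_flight(response_text):
    # Treat the response as plain text and select the first flight
    # departing between 8am and 9am.
    for line in response_text.splitlines():
        flight_no, departure = line.split(",")
        if 8 <= int(departure.split(":")[0]) < 9:
            return flight_no
    return None  # caller must decide what to do when nothing matches

flights = "AI101,07:40\nAI205,08:25\nAI309,10:15"
print(pick_flight(flights))  # AI205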

3. Skipping actions/iterations due to data issues. When we do not get proper data from parameter csv files, get null inputs, or get no data in the server response, we may need to skip the subsequent steps. Before any form is populated with data, check the validity of the data using if conditions and only then send the request - see the sketch below.
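A minimal sketch of such a guard in Python; log and send_request are hypothetical stand-ins for whatever commands your tool provides.

def log(message):
    print(message)

def send_request(path, payload):  # stand-in for the tool's request command
    print("POST", path, payload)

def submit_deposit(account_no, amount):
    # Validate the parameter data before populating the form.
    if not account_no or not account_no.strip():
        log("skipping iteration: empty account number in data file")
        return False
    if amount is None or amount <= 0:
        log("skipping iteration: invalid amount %r" % amount)
        return False
    send_request("/deposit", {"account": account_no, "amount": amount})
    return True

submit_deposit("", 100)          # skipped due to bad data
submit_deposit("AC-1001", 250)   # request is sent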

4. Repeat a portion of a script. Many tools provide iterations for folders/containers/actions/blocks of code. If you want to repeat an entire block, use that built-in feature. If you need only a set of lines within a block to repeat, write your own loop.

5. Custom logging. If you need to log some text or data in your own format, use file handling open-write-close methods to achieve it, as sketched below.
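A minimal Python sketch of such custom logging with plain open-write-close file handling; the file name and message format are illustrative choices.

from datetime import datetime

def custom_log(vuser_id, message, path="loadtest_custom.log"):
    # Open in append mode, write one timestamped line, close immediately.
    with open(path, "a") as f:
        f.write("%s vuser-%03d %s\n" % (datetime.now().isoformat(), vuser_id, message))

custom_log(7, "deposit transaction completed in 1.8 s")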

For free lessons on automation tools, visit us at http://www.openmentor.net.




Tuesday, October 22, 2013

Performance Testing - Scripting Part 2

When people move from functional test automation to performance test automation, there is always a question of how to handle dynamic data. All functional automation tools provide data driven tests, where you feed data from a csv file or spreadsheet. But how do we extract data that is sent by the server? Most functional tools provide GetRunTimeValues-style commands for the UI, which dump the UI data into an array from which we can extract the data we want. But in a load test, when 1000s of vusers are running, there is no UI at all. How do we handle dynamic data?

Look at this sequence of events. 

Client sends request REQ1. This is to get the list of purchase orders (PO).
Server sends response RES1. This has a lot of purchase order IDs.
Client must choose the first purchase order from response RES1 and pass that in request REQ2 with a few modified values. 

You do not know what PO ID will come from the server; that is purely dynamic data. Unless you send one of the IDs coming in RES1, REQ2 will fail. All of this happens in the background, without you seeing a UI. To achieve this, you need to do correlation.

Correlation means: extract some details from a response, and pass them as part of subsequent requests.

All load testing tools use the same principle to handle correlation. The response is pure html text. See this example, where the list of POs comes like this.

<table>
<tr class="po"><td>POID</td><td>CUSTOMERID</td><td>PODATE</td></tr>
<tr class="po"><td>9235</td><td>Navy Corp</td><td>01-10-2013</td></tr>
<tr class="po"><td>9845</td><td>Blue Minds</td><td>10-05-2013</td></tr>
<tr class="po"><td>9876</td><td>Blue Fields</td><td>06-03-2013</td></tr>
<tr class="po"><td>9989</td><td>Red Grove</td><td>07-04-2013</td></tr>
</table>

When I send REQ1 again after some time, the table data in the response may be different. Hence, every time, we must pick the first data row from the response. How do we extract row 1? There are 5 rows in the table and the first row is the header.

Here is the simple trick: "chase the data". I want to pick up 9235. Locate what appears to its left and what appears to its right. The text <tr class="po"><td> appears to its left and </td> appears to its right. There are 5 such rows with the same left and right text, but 9235 appears at the 2nd position, or ordinal. So if we tell the load test script to locate the text with <tr class="po"><td> as the left boundary and </td> as the right boundary, it will give an array [POID, 9235, 9845, 9876, 9989]. In this array, my text appears at ordinal 2. Look at this command.

locate_dynamic_data(dynavar1, LB=<tr class="po"><td>, RB=</td>, Ordinal=2);

where dynavar1 is the variable into which the value is loaded, LB is the left boundary and RB is the right boundary. When the load test tool sees this instruction/command, it scans the response, locates the text per your LB, RB and Ordinal, and places the extracted data in the variable dynavar1. Once it is in a variable, you can pass it to subsequent requests.
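The command above is generic, not the syntax of any specific tool. The same boundary-plus-ordinal idea can be shown in a few lines of Python with the re module:

import re

response = '<tr class="po"><td>POID</td><td>CUSTOMERID</td><td>PODATE</td></tr>' \
           '<tr class="po"><td>9235</td><td>Navy Corp</td><td>01-10-2013</td></tr>' \
           '<tr class="po"><td>9845</td><td>Blue Minds</td><td>10-05-2013</td></tr>'

def locate_dynamic_data(text, lb, rb, ordinal):
    # Everything between the boundaries, non-greedy; ordinal is 1-based.
    matches = re.findall(re.escape(lb) + "(.*?)" + re.escape(rb), text)
    return matches[ordinal - 1] if len(matches) >= ordinal else None

dynavar1 = locate_dynamic_data(response, '<tr class="po"><td>', "</td>", 2)
print(dynavar1)  # 9235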

Wherever you see dynamic data, you need to correlate. You must ensure the correlation command gets proper data from the response. If the response itself is not received, or the text you look for is not present, subsequent requests may fail; you need to handle that part in scripting.

In Java-framework web apps, there is usually a dynamic text called jsessionid; in the same way, in asp.net apps you will see dynamic texts such as __VIEWSTATE and __EVENTVALIDATION. Unless you handle these framework-related dynamic texts, the load test script will not work. But load test tools are intelligent: whenever text differs between recording and replay, they warn the tester that these changing areas are potential correlation candidates. Hence pay attention when tools warn you about dynamic data.

For free lessons on automation tools, visit us at http://www.openmentor.net.

Friday, September 20, 2013

Performance Testing - Scripting - Part 1

Web applications typically follow an n-tier architecture: presentation layer, business logic layer, data access layer, data layer, external interfaces, etc. A website, on the other hand, mostly has static pages, maybe with javascript. Entry and exit to websites are unpredictable, as every page is a potential entry and/or exit; the pages are all independent. In a web application (webapp), the sequence of tasks one does is very important, to complete a business transaction. Hence there is a difference between load testing a website and load testing a webapp. As we saw in the previous post, the very first step is to identify the most frequently used scenarios and the steps for those scenarios. From that point, we need to focus on scripts.

Every load test tool provides a recording feature. This is the most economical, easy and powerful way of scripting. First the tester records the sequence of operations as one user does them: start recording, then perform the business transaction steps on the application. The tool records all the requests going to the server and the responses coming back, using an internal proxy; essentially, every load test tool sniffs the request/response traffic between the browser and the server. A request may be a get, post or ajax request. The tool identifies the url, the query string parameters of the request, the server response data and redirection pages. Soon after recording, if we replay the script, the tool must be able to send the same requests to the server.

Mere recording and replaying will not satisfy many business rules. Some apps require unique data, some need random data, some need advanced correlation - and that is what we must address next in the recorded script. Also, it is better to organize the requests under folders/containers. This helps with easy maintenance of the scripts. For example, if our sequence of operations looks like the following,
  • Go to home page
  • Fill userid and password, login
  • Navigate to items page
  • Load items list grid
  • Select an item and edit
  • Enter new details and save
  • Refresh items grid
  • Logout
It is better to organize the same like this.

  • Initialize
    • Initialization
    • Go to home page
    • Fill userid and password, login
  • Items Grid Load
    • Navigate to items page
    • Load items list grid
  • Modify Item
    • Select an item and edit
    • Enter new details and save
  • Items Grid Refresh
    • Refresh items grid
  • Finalize
    • Logout
Once the requests/steps are organized, the next step is to provide proper data to the script. We cannot reuse the same data we entered during recording. There are 2 parts to the data - static data and dynamic data. Static data is the data supplied/typed by the user on the screen; dynamic data is the data the server sends back to the screen. Providing static data is called parameterization and handling dynamic data is called correlation.

Static data can be provided thru variables. Tools provide a variable manager module. We can create a variable and load it with different values at run time. Some data that we often need can be obtained from the system itself, such as the current date and time, user name, machine name, random numbers, random text, etc.

Some data will be application specific, with dependencies on other application data. For this, we usually create a file holding such data and modify the hard-coded data values in the script to use data variables. The variables are mapped to the file and to specific columns in that file. This is very similar to data driven tests in functional test automation.

For example if item creation page requires item code, item name, UOM, price as user supplied data, create a csv file like this.

ITEMCODE,ITEMNAME,UOM,PRICE
1001,Maxx Soap,NOS,18.90
1002,Vixor Biscuits,PCK,12.60
..
1099,Brainee Rice,KGS,14.50

Create a variable in the tool (say myItemData) and map it to this file. In the script, replace the hard-coded values with myItemData.ITEMCODE, myItemData.ITEMNAME, myItemData.UOM and myItemData.PRICE. During run time, the tool will read the data from this file and supply the values from the respective columns to the right variables. Usually, tools read the lines sequentially and feed them to the script; this can be changed as well. We will see these in the next post. Stay tuned.
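A minimal Python sketch of what the tool does behind the scenes: read the sheet sequentially and feed one row per iteration to the script. io.StringIO stands in for the mapped csv file, and fill_item_form is a hypothetical stand-in for the recorded steps.

import csv, io

sheet = io.StringIO("""ITEMCODE,ITEMNAME,UOM,PRICE
1001,Maxx Soap,NOS,18.90
1002,Vixor Biscuits,PCK,12.60""")

def fill_item_form(row):  # stand-in for the recorded data-entry steps
    print("filling form:", row["ITEMCODE"], row["ITEMNAME"], row["UOM"], row["PRICE"])

for row in csv.DictReader(sheet):  # the header row supplies the column names
    fill_item_form(row)            # one iteration per data line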

For free lessons on automation tools, visit us at http://www.openmentor.net.

Tuesday, September 10, 2013

Performance Testing - Protocol Selection, Script Recording



If you know one load testing tool, it is easy to learn another. You need to be clear on the load testing fundamentals in order to learn a new tool and master it. We will help you step by step in achieving that mastery over load testing tools. Let us take the first part of the tools: protocol selection.

A protocol is nothing but the format in which the client and server communicate with each other, e.g. http, https, ftp, smtp, wap, tcp/ip, rdp, etc. Every application developer must freeze the protocol and architecture first, as changing these at a later stage will mess up a whole lot of things. But a load tester only needs to understand what goes out as the request and what comes back as the response. Ultimately everything goes as a stream of bytes; but to operate on the requests for parameterization etc., the load tester must know the parts of the request and the parts of the response.

It is difficult to manage too many protocols and to learn them at the bits-and-bytes level. Instead, if the tool can parse the request and response and display them in a clear user interface, most of the load tester's problems are solved. The tester must refer to the design documents and consult the development team to identify and choose the right protocols. A few applications may use multiple protocols to carry out a specific transaction; in that case, the tester must select all those protocols before recording the script.

The catch here is that load testing tools charge you based on the protocol modules you buy! There is a base license cost and an add-on cost for every protocol module. You may think yours is a simple web application, but the app may use Google Web Toolkit (GWT) or Flex-related formats; without those protocol modules in the tool, you cannot get a clean script. Hence one needs to be careful when purchasing the tool and add-on licenses.

Once you freeze the protocol, you need to focus on scripts. We create a load testing script by recording a typical business scenario, as though one user is performing it on the application. For example, a user logs into an HR application, submits a travel request and logs out. The actual load test will send 100s of such requests (simulating 100s of users). The key question here is: which scenarios must we record as part of scripting?

Identify the most frequently used user scenarios. All said and done, you and I go to google and do a search 80% of the time. Some other person may go to google's stocks page to get stock quotes. So user priorities vary. Though google has 1000s of pages, only a small set of pages is frequently used, by most of the users. In the same manner, identify the most frequent scenarios in your application and tabulate them. How many users will execute those scripts, how long the users will run, etc. - we will deal with those in subsequent sections.

If you take any ecom site, the most frequently used scenarios are:

  1. Go to home page, type a keyword, do a search, load search results, view an item from the results
  2. Go to home page, type a keyword, do a search, load search results, view an item from the results, add to cart
  3. Go to home page, type a keyword, do a search, load search results, view an item from the results, add to cart, provide payment details, buy
In the above 3 scenarios, many activities are common, but for every one customer actually buying, 100s of other customers just "surf" and "window-shop" without adding to cart. Though they do not contribute to revenue, they occupy your system and network resources. After attracting a customer to the site thru marketing, it is very hard to watch the customer abandon the shopping cart without buying! Usually this happens due to slow response. So, buckle up, and make it faster!


For free lessons on automation tools, visit us at http://www.openmentor.net.

Monday, August 26, 2013

Performance Testing - Basics

Squeeze the app before release; if the app withstands that, it is fit for release. But how to squeeze? How do we determine the number of users, data volume, etc.? Let us take this step by step and learn. Performance tests are usually postponed until customers feel the pinch. The primary reasons are the cost of the tools and the capability needed to use them. If one wants to earn millions of dollars thru a hosted app, a good, proven and simple way is to increase users and reduce price. Do this and the business volume will grow - but it brings performance issues along with it.

Most tools use the same concept: emulate the requests from the client side of the application, programmatically. Once one is able to generate requests, processing the response is a relatively easier task. When you choose a tool, first look for the must-have features and then for the nice-to-have features.

The must-have features are listed below.

  1. Select protocols (HTTP, FTP, SMTP etc.)
  2. Record the user sequence and generate script
  3. Parameterize the script to supply a variety of data
  4. Process dynamic data sent by server side (correlation)
  5. Configure user count, iterations and pacing between iterations
  6. Configure user ramp-up
  7. Process secondary requests
  8. Configure network speed, browser types
  9. Check for specific patterns in the response
  10. Execute multiple scripts in parallel
  11. Measure hits, throughput, response time for every page 
  12. Log important details and server response data for troubleshooting
  13. Provide custom coding facility to add additional logic
The nice-to-have features are listed below.
  1. Configure performance counters for OS, webserver, app server, database server. This way, you can get all results under one single tool
  2. Automatically correlate standard dynamic texts based on java or .net framework. This will reduce scripting time
  3. Provide a visual UI to script and build logic
  4. Generate data as needed - sequential, random and unique data
  5. Provide a flexible licensing model - permanent as well as pay-per-use will be great
  6. Integrate with profiling tools to pinpoint issues at code level
When one evaluates a performance testing tool, one must do a simple proof of concept covering the above features, to see how effectively the tool handles them. Needless to say, the tool must also be simple to use.


Here are a few simple terms you need to be clear about - at least academically. There are many different definitions for the phrases given below, but we take the most widely accepted definitions from various project groups.

Load Testing - Test the app for an expected number of users. Customers usually know their current user base (example - the total number of account holders in a bank). Online users are usually between 5% and 10% of the customer base. But an online user may just be a logged-in user doing no transaction with the server; our interest is always in the concurrent users, who are usually between 5% and 10% of the online users. So if the total customer base is 100,000, then 10% of it, 10,000, will be online users, and 10% of that, 1,000, will be concurrent users.

Stress Testing - Overload the system by x%. That x% may be 10% more than the normal load or even 300% more. Load tests usually run for a longer duration; stress tests happen for a shorter duration, as spikes with abnormally many users. Stress is like a flash flood.

Scalability/Capacity Testing - Find the level at which the system crashes. Keep increasing users and you will see more and more failures, and eventually a crash. Some companies use the term stress testing to include capacity testing as well.

Volume Testing - Keep increasing the request data size, and process requests when the application database has 100s of millions of records. This checks the robustness and speed of data retrieval and processing.

Endurance/Availability Testing - Test the system for a very long period of time. Let users keep sending requests 24x7, maybe for a week or a month, and see if the system behaves consistently over that period.

For free lessons on automation tools, visit us at http://www.openmentor.net.

Monday, August 12, 2013

Non-functional Testing

You are never alone. The environment around you changes every second. Is your behavior in a changing environment consistent, or unpredictable? The same is true for software applications. Testing the behavior for a given input and expecting a definitive output is termed functional testing. But the same input to the same product, in a different environment or with an external factor, need not give a consistent output. Testing that is non-functional testing.

There is a variety of non-functional testing topics that we are going to discuss in detail. The key areas to be addressed are given below.
  1. Performance Testing
  2. Compatibility Testing
  3. Interoperability Testing
  4. Security Testing
  5. Recovery Testing
  6. Usability Testing
  7. Localization Testing
  8. Globalization Testing
  9. Adhoc Testing
Let us first take performance testing. In today's world, the internet is everything and it is everywhere. It connects PCs, servers, mobiles and people, and is inseparable from our life - as important as electricity today. This means more people use applications. Take google, facebook, amazon, youtube, msdn, etc.: all these sites/portals are used by millions of people. When more users use the system, the company gets more visibility and hence more money. But the crowd brings problems too.

When more people use it, the system slows down or crashes. How many people will tolerate a home page taking more than 5-7 seconds to load? If your product does not load or start quickly, there are enough competitor products for users to try out. Hence speed is the single factor that wins the hearts of users, right at the first shot.

Remember the trinity: Users, Data and Time. If any one of these factors increases, the system uses more resources such as cpu, memory, disk and network, and that causes slowness. But how will I test my app with 1000s of users hitting it at the same time? Can we assemble real users on a beach, give them laptops or tablets, and coordinate them to test the app? No way. Hence, instead of relying on real users, we go for virtual users. Performance testing is now a key factor in releasing an app.

Performance testing has different sub-types.
  1. Load testing
  2. Stress testing
  3. Scalability or Capacity testing
  4. Volume Testing
  5. Endurance or Availability testing
To carry out these tests, we need proper tools. There are priced tools such as HP LoadRunner, IBM Rational Performance Tester and Borland SilkPerformer, and there are free open source tools such as Apache JMeter. In the coming sections, we will see the concepts of load testing and how to use these tools.


For free lessons on automation tools, visit us at http://www.openmentor.net.

Tuesday, July 23, 2013

Test Automation - Exception Handling

Test automation is all about a tester producing test scripts, i.e. code. So all the problems a developer faces, a tester will also face; hence the test script itself must be fully tested by the tester. There are 4 major factors that can affect a test script, and all 4 are unpredictable. The test script must handle those areas gracefully. This is called exception handling, also known as recovery scenarios.

Hurdle 1 - Unknown pop-ups. As the script executes step by step, the application reacts to those steps, and there is a specific expectation from each step. For example, when a user enters a valid account number in the account number field and presses tab, the account name must be auto-populated in the name field. If, during execution, wrong data is fed into the account number field and the application shows a pop-up stating "Hey, the account number does not exist; please check the data", that pop-up is a blocker: without closing it, nothing can be done and the script cannot proceed. This is just one example; on many occasions, unwanted pop-ups crop up due to data issues, application functional issues or OS-related issues. The tester must ensure all such pop-ups are addressed when the script runs. The ideal solution is to close the pop-up and continue, or move to the next test case.

Hurdle 2 - Objects or pages not found, or disabled for input. This happens due to a functional bug in the application. Take an example: when the user goes to the account balance screen, the account number field must be enabled for data entry. If that field occasionally does not appear, or is disabled, the script will still try to enter the account number, and the type event will fail. It is not practical to check the enabled/displayed property of every field before every step. A field-not-found error can also happen if the field is rendered outside the monitor's display resolution. If this kind of error happens, it is better to re-login to the app and move to the next test case.

Hurdle 3 - Application crashes. This is usually due to some critical bug in the application. The test script will not find the application at all for its next step. In this case, it is ideal to restart the app and start the next test case. If we retry the same test case, it may crash again, and we may go into a loop.

Hurdle 4 - Script errors. These come from wrong logic written by the tester in the test script: accessing wrong array locations, dividing by zero, trying to open a non-existent file, etc. If this kind of error happens, it is better to re-login to the app and move to the next test case.

Tools provide a variety of mechanisms to handle exceptions, thru coding, configuration or both. Every exception is a lesson. It is very difficult to identify and handle all exceptions before the script runs; it is an evolving process. So, as and when new unforeseen exceptions happen, add them to the exception handler library. A generic sketch of this idea follows.
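A minimal, tool-neutral Python sketch of such a recovery wrapper around each test case, in the spirit of the four hurdles above; close_popups, relogin and restart_app are hypothetical stand-ins for tool-provided recovery actions.

def close_popups(): pass                 # hurdle 1: dismiss unknown pop-ups
def relogin(): print("re-login")         # hurdles 2 and 4: return to a known state
def restart_app(): print("restart app")  # hurdle 3: the application crashed

def run_with_recovery(test_cases):
    for tc in test_cases:
        try:
            close_popups()               # clear any blocker left behind
            tc()                         # execute the test case
        except RuntimeError:             # e.g. object not found / app gone
            restart_app()
        except Exception as e:           # script errors: bad index, divide by zero...
            print("script error in %s: %s" % (tc.__name__, e))
            relogin()                    # move on to the next test case

def tc_balance_inquiry(): print("balance inquiry ok")
def tc_bad_logic(): return 1 / 0         # simulated script error (hurdle 4)

run_with_recovery([tc_balance_inquiry, tc_bad_logic])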


For free lessons on automation tools, visit us at http://www.openmentor.net.





Monday, July 15, 2013

Test Automation - Scripting Essentials

Record and replay alone cannot solve test automation problems; indeed, record/replay may not always be possible. A test script may need some decision making at run time, some intelligence in handling situations, or a path that changes based on values appearing on screen at run time. The simple solution to all such issues is scripting. Every tool provides a scripting language such as VBScript, Java, JavaScript, C#, Ruby, etc. If we mix record/replay and scripting, we multiply the power of the tool.

Rule 1: Never build application logic in the scripting language. Example: you recorded how to book a one-way ticket for 1 person; for 2 people, if the app must multiply the price by 2, do not build that multiplication logic into your script. Whenever the app logic changes, the script must be modified and retested, and you may introduce bugs while modifying that logic. Instead, look at the input, manually determine the expected output, feed that output as checkpoint values, and move on. Your brain is the best tool, better than any automation tool.

Rule 2: Comment your script well. Script maintenance is very important, so make sure another tester can easily understand your script.

Rule 3: Unit test your test script. Remember, developers make mistakes in their code; when you program, you will also make mistakes in yours. Being a tester does not guarantee that your test scripts will work without testing.

Rule 4: Ensure your test functions work for a variety of parameters. A single function may be fed 1000s of data rows thru a data driven test, so test your script with multiple data sets.

Rule 5: Avoid nested if conditions. Go at most 2 levels deep; even that will consume more of your time in unit testing your test scripts.

Rule 6: Avoid nested loops. Nested loops are not actually required in 95% of the cases, so be judicious when using them.

Rule 7: Maintain a traceability matrix for your test scripts. Keep a spreadsheet that documents the input params, output params, file details, function details and caller details for every test script/function. Otherwise, when you grow to 1000 test scripts, changing one script may affect another without you being aware of the dependencies.

Rule 8: Always run all test scripts as a batch. This eliminates base-state problems.

Rule 9: Always have another tester (not the author of the test script) run the test batch. This eliminates human-related issues and documentation issues.

Rule 10: Always run the test batch from another machine (not the one used for building the test scripts). This eliminates system-related issues, hard-coded drive/folder names and documentation issues.


For free lessons on automation tools, visit us at http://www.openmentor.net.

Sunday, June 23, 2013

Test Automation - File/DB Checkpoints

Most apps take input from the user thru the UI, apply some logic, store the data in a file or database, and finally show a success or failure message on the UI. Checking the message alone is not good enough; what if the data was truncated, wrongly formatted, wrongly stored or lost? This is why you need to check the data in files and databases. Testers can see the results of a test case on the UI, in a file, or in database tables. Checking text on the UI is very simple, as one can see it immediately; checking results in a file takes more time and a few extra steps.

Let us take file storage first. First of all, the tester must know the folder and file name in which the app stores the results. The file may be a flat file, a csv, a tsv or an XML file; hence the tester must also know the format of the file. First, the tester should manually enter data thru the UI, save the record and see how the file stores the data. Imagine the app stores an employee record like this (one record per line): emp_code,emp_name,emp_dob,designation,salary,branch,emp_doj. This is csv format. We do not know the length of each line; it varies with the length of the name and other fields. But we do know that whatever data is given in the UI must appear in the file, in the given order. Hence, to check whether the data is present in the file, we must open the file, read the records, compare the data with the expected results and then close the file. All tools provide a file interface thru a file system object (VBScript) or a file input stream (Java). Using these commands, read record after record, compare with the expected results and close the file, as in the sketch below.
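A minimal Python sketch of such a file checkpoint for the csv layout above; the file name and expected values are illustrative.

import csv

FIELDS = ["emp_code", "emp_name", "emp_dob", "designation", "salary", "branch", "emp_doj"]

def file_checkpoint(path, expected):
    # Pass if some record in the file matches every expected field value.
    with open(path, newline="") as f:                     # open
        for row in csv.DictReader(f, fieldnames=FIELDS):  # read record by record
            if all(row[k] == v for k, v in expected.items()):
                return True                               # compare: match found
    return False                                          # 'with' closes the file

# e.g. after saving an employee thru the UI:
# print("PASS" if file_checkpoint("employees.csv",
#         {"emp_code": "E1042", "emp_name": "John Bosco"}) else "FAIL")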

If the app stores data in an XML file, it is a little easier, because every scripting language provides simple commands to check whether a node (<tag> </tag>) is present, how many child nodes exist, and what data is embedded within a specific tag. These commands use an XML object to hold the file data. Tools like HP QTP even provide a good UI to view the whole XML structure as a tree, so we need not write code at all to check data in an XML file.

When it comes to checking records in a database, you need to know the tables and their structure. Soon after every add, edit or delete transaction thru the app UI, we must run a select query on the respective table, extract the data and compare whether it is correct. The important point in writing the query is to narrow it down to data that is very specific to the test. Do not grab lots of records that are not required for the test; for example, if I cancel a ticket with reservation number XB45890, I should select the record for that reservation number only, so that I get exactly the record I want to compare. Select records based on unique key values rather than a generic where clause - see the sketch below.
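A minimal Python/sqlite3 sketch of such a database checkpoint, narrowed down by the unique reservation number as recommended above; the table and column names are illustrative.

import sqlite3

def db_checkpoint(db_path, reservation_no, expected_status):
    conn = sqlite3.connect(db_path)
    try:
        row = conn.execute(
            "SELECT status FROM reservations WHERE reservation_no = ?",
            (reservation_no,)   # unique key: fetch only the record under test
        ).fetchone()
        return row is not None and row[0] == expected_status
    finally:
        conn.close()            # close the connection soon after the checkpoint

# e.g. after cancelling thru the UI:
# print("PASS" if db_checkpoint("app.db", "XB45890", "CANCELLED") else "FAIL")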

When you use file or database checkpoints, make sure you close the file or database connection, soon after the checkpoint.


For free lessons on automation tools, visit us at http://www.openmentor.net.

Friday, May 31, 2013

Test Automation - Checkpoints

A test must end with a pass or fail status. Humans execute test steps, see the actual results in front of their eyes, compare them with the expected results and declare the pass/fail status. Using record/replay/data, the test automation tool executes the test steps, but the tool does not know the expected results. We need to tell the tool what is expected; then the tool can declare the results. This is implemented thru the checkpoint or verification point feature in every tool.

The tester feeds data into UI objects such as text boxes, combo boxes, radio buttons, etc. When the form is submitted, usually thru a button click to save, update or delete, the application performs the operation and posts a message back on the screen. The result is usually a text message such as "Your ticket is successfully booked. Thanks for choosing us." The tester then states that the booking test case has passed; if this message does not come, the case is treated as failed. Messages are usually displayed in specific places on the screen or status bar, or the screen may refresh with the entered data plus an auto-generated id, etc.

When automating test cases, every test case will have steps and checkpoints. In a checkpoint, we specify which object to look at for results, which property of that object reflects the result, and what the expected result is; e.g. look at the status bar, look at its text, and the text must be "The transaction is completed successfully.". In some cases it is not a text message; instead we may look for the price to be updated in the price text box. When you go to a travel portal, the number of passengers is usually set to 1 and the price shown is for 1 person, say $110. When you change the number of passengers to 2, the price must change to $220. So, based on one event, the expected outcome changes in another field. In this case, we must place a checkpoint on the price text box, on its text property, with an expected value of $220.

In some cases, only when I check the "I accept the terms" check box does the continue button become enabled; until then it is disabled. Here, we place a checkpoint on the continue button, on its enabled property, with the expected outcome true/on. In an email inbox, when I delete a mail, the mail count shown at the top must reduce by one. Like this, the screen shows the actual results in specific places; we must find those places and put appropriate checkpoints there. Some organizations standardize their UI elements, for example all push buttons must be 90 by 30 pixels; in such a case we place checkpoints on the width and height properties of the buttons. A generic sketch follows.
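A minimal, tool-neutral sketch of a checkpoint in Python; real tools provide this as a built-in command, and the object and property names here are illustrative.

def checkpoint(obj_name, prop, actual, expected):
    status = "PASS" if actual == expected else "FAIL"
    print("[%s] %s.%s: expected %r, actual %r" % (status, obj_name, prop, expected, actual))
    return status == "PASS"

# After setting the number of passengers to 2 on the booking screen:
checkpoint("PriceTextBox", "text", "$220", "$220")    # text checkpoint
checkpoint("ContinueButton", "enabled", True, True)   # property checkpoint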

Every test case will have the standard sequence as follows.


  • Test steps to establish pre-requisite state of the application
  • Test steps for the test case
  • Checkpoint-1
  • More test steps for the test case
  • Checkpoint-2
  • Some more test steps for the test case
  • Checkpoint-3
  • Test steps to clean-up

When a checkpoint passes, the tool shows the result in green; when it fails, the result is painted red. Ideally, for every expected result in my test case, I must have a checkpoint command in the test script.

For free lessons on automation tools, visit us at http://www.openmentor.net.



Monday, May 6, 2013

Test Automation - Data Driven Test

You need more data for testing. I would rather re-phrase that: you need more varieties of data, rather than simply more data. Each data set must help the tester drive the application thru a different logic or path, so that we test better. Take the example of booking an airline ticket. It is just 1 screen that takes data, and users fill these fields to get their ticket booked. Each ticket may be a different combination. A human tester gets bored repeatedly seeing and operating the same screen - but it has to be done anyway.

Here comes the data driven test (DDT) feature of test automation tools. The test steps are the same but the data is different, and each transaction must use a different data set - if this is your case, use a data driven test. Every testing tool comes with a data sheet or data pool, where you provide data in a spreadsheet-like file. For booking a ticket you need the from place, to place, one way or round trip, date of journey, return date, number of persons, etc. Though there are many more details in ticket booking, let us limit ourselves to these for now.

The first thing for DDT is to create a file that has this data. Usually you can use a csv or xls file. The first line is usually the title row for the data. See the example below.


FROM-PLACE,TO-PLACE,TRIP-TYPE,JOURNEYDT,RETURNDT,NUMPERSONS
Los Angeles,New York,TWOWAY,10-JUN-2013,14-JUN-2013,2
Los Angeles,Denver,ONEWAY,10-JUN-2013,10-JUN-2013,1
Los Angeles,London,TWOWAY,10-JUN-2013,18-AUG-2013,5
...

Each line of data should represent an equivalence partition that we want to test in our application.

Once the data is created, we record a script using the tool that books one ticket by entering data into the above fields. While recording, we give some sample data and that is reflected in the script, e.g.

BookingScreen.Clear
BookingScreen.FromAirPort.Set "Los Angeles"
BookingScreen.ToAirPort.Set "Denver"
BookingScreen.TripType.Select "TwoWay"
BookingScreen.DateOfJourney.Set "10-May-2013"
BookingScreen.DateOfReturn.Set "12-May-2013"
BookingScreen.NumberOfPersons.Set 1
BookingScreen.Submit

The syntax given above is generic and not specific to any tool. To make the script use the data from our file, we may have to modify the script manually or use the tool's data driver wizard. We need to point each UI field's data entry to a column in the data sheet. We must also loop thru all the rows in the data sheet, so that the script executes for all records and books multiple tickets. It may look like the following.

DataSheet = "C:\\mydata.csv"

For currRow = 1 to DataSheet.RowCount
   DataSheet.CurrentRow = currRow
   BookingScreen.Clear
   BookingScreen.FromAirPort.Set DataSheet.getColumnvalue("FROM-PLACE")
   BookingScreen.ToAirPort.Set DataSheet.getColumnvalue("TO-PLACE")
   BookingScreen.TripType.Select DataSheet.getColumnvalue("TRIP-TYPE")
   BookingScreen.DateOfJourney.Set DataSheet.getColumnvalue("JOURNEYDT")
   BookingScreen.DateOfReturn.Set DataSheet.getColumnvalue("RETURNDT")
   BookingScreen.NumberOfPersons.Set DataSheet.getColumnvalue("NUMPERSONS")
   BookingScreen.Submit
Next



This simple code will fetch all the records from the csv file and repeatedly feed the screen with the different data sets. This way, a 10-line script can enter 10000 records with little human interference.

Friday, April 19, 2013

Test Automation - Object Identification

There are about 30 students in a class; you need to find out whether the person you want to talk to is there or not; assume you have not seen him before. You go and ask, "Is John here? I have a parcel to deliver." So you use the first name to identify a person. If there are 2 Johns in the same class, you will ask for "John Bosco"; you use the last name as an additional qualifier to uniquely identify that person among the 30 students. In the rare case that both are named John Bosco, what will you do? You may say "John Bosco, son of David P Bosco". The goal is to deliver the parcel to the right person. Testing tools work in a very similar way to carry out click or type actions on the exact objects of the application UI.

If one has a very good grasp of object identification concepts, 80% of the automation problems are already solved. An object is an item on the user interface that you either click, type into, or read to infer some information; e.g. the first name text box, the state combo box, the save button. The recording feature in every tool does this object identification internally. Most tools store the details of objects in a separate file: in HP QTP it is called the Object Repository, in IBM Rational Functional Tester the Object Map, in Borland SilkTest frames.inc, and so on.

Every object has a type, aka class. The most common types are text box, check button, radio button, combo box, push button, menu bar, tool bar, label, link, image, grid (rows, columns, cells), scroll bar, status bar and progress bar. These are the typical items we see in every application. But the exact name each technology gives these classes differs; for the same push button you see on screen, native Windows C++ will say Push Button, Java will say JButton, and Turbo C++ may say TButton. This technology-specific text is called the native class.

Objects also have a name associated with them. This is not mandatory, but preferable. An object may display one name on screen for the user to read, while internally using another name - like the nicknames we use for people. Tools understand the internal name of the objects rather than the display name. Ideally, developers should use the same meaningful names for display and internal purposes.

The best way to identify an object is by the type/class of the object plus its internal name, e.g. button/OK, combobox/States, textbox/ZIPCode. Sometimes the same display name appears twice on the UI. Consider a screen that takes your personal and office addresses; both have street/city/state text boxes. The display names are the same, and humans understand this well; but for the tools to distinguish two objects with the same display name, it is better to use the internal names given by the developers, as in the sketch below.
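A minimal sketch, in Python, of what an object repository/map conceptually holds: a logical name mapped to identification properties; the entries are illustrative and not in any specific tool's file format.

OBJECT_MAP = {
    "PersonalStreet": {"class": "textbox", "internal_name": "txtHomeStreet"},
    "OfficeStreet":   {"class": "textbox", "internal_name": "txtWorkStreet"},
    "SaveButton":     {"class": "button",  "internal_name": "btnSave"},
}

def identify(logical_name):
    props = OBJECT_MAP[logical_name]
    # A real tool searches the live UI for an object whose class and internal
    # name match these properties; here we only show the lookup.
    print("locating %(class)s with internal name %(internal_name)s" % props)

identify("OfficeStreet")  # two 'Street' boxes exist; the internal name disambiguates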

When we need additional details to uniquely identify an object, other properties such as the location (x, y coordinates) or the index of the object (nth object from the top of the screen) are usually used. But position-based identification is a risky option; it may fail when the screen is updated with new objects, or a few objects are removed, which changes the relative position of this field on the screen.

To understand the objects and their properties, tools provide object spy or object inspector utilities. Before you start automating, spend a few hours looking at the class, name and other properties of every object on every single screen. If you see objects with confusing or non-unique properties, negotiate with the development team to get consistent naming of objects.

Remember: proper naming of objects significantly reduces the time to automate. This simple process discipline from the dev team makes test automation easy.

For a live demo of this, please click on this url.

Friday, April 5, 2013

Test Automation - Record and Replay

"I need a tester who tests my app very fast!", yells the test manager. This is the same tone we hear from every other test manager. When there are tight deadlines, 100s of customers using the product, one small change needs 3000 test cases to be executed. High time you start automating your test cases. Automate or lose your people and customers - it is your call now.

Take 1 minute to understand how automation works. Take electric fan. Before fan was designed, we used our hands and a fan made of palm leaves; our hand will swing that fan and you can feel the air breeze on your face. To feel that breeze, there is a physical movement. The electric fan automates that. It has the long wings, acting as the palm leaf fan; it rotates and that physical motion pushes air towards you. Well, how can we achieve that in software testing? Instead of me, a tool doing my physical actions such as click, type, drag and drop etc.? 

All tools use this record and replay feature. A microphone records sound waves and speaker reproduces the same; camera records light waves and screen reproduces the same. Same way, inside the test automation tool, there is a recording mechanism. This recording mechanism uses win32/64 API events. When you click on a button on some screen, a win api is generated and is broadcast across the system. That api will tell a single click, left button on mouse has happened on screen X on button with label as "Submit". Our room is full of radio waves, TV channel waves, mobile phone waves; yet we cannot feel or see or hear those? Why, because our eyes and ears are not tuned to detect those. The moment we have a radio and tune the frequency, we hear. The radio has a mechanism to detect and convert those waves to a humanly audible form. Same way, the testing tool has a mechanism to "listen" to those apis and tools produce script. 

The script lines will have the details such as Window name, screen object name, event, event details. e.g.

Notepad - myfile.txt, Edit-Find Menu, Click, left button/single
Find, Find text:, Type, Hello

The script is readable and understandable. When this script is replayed, the tool produces instructions to the operating system, via the reverse win apis, to perform the same action. The application will receive a click event on the object and it will respond to the same. If these apis are not exposed or suppressed, none of the tools will work.

Record and replay are the powerful features of every tool. Do not underestimate the same. The level 1 automation can be quickly achieved by recording every test case and replaying them one after the other. Record once and you can reply any number of times, without spending your time and energy. If you have 100 test cases, record all of those and replay those. While the tool is replaying, manually oversee the screens for any issues. This itself will save at least 30-40% of your testing time.

To see the record and replay (also known as capture-playback), click here to see the demo video.

Monday, February 25, 2013

Test Automation Basics



One of the standard issues every test manager faces is having proof of what is tested and what is not. If I have 300-400 test cases, I am in full control. But if I have 5000-odd test cases, how do I know whether the 3617th test case was executed or not? I trust my testers. Now imagine I need to run 200 test cases in 3 different browsers; that multiplies my effort. I cannot have 3 testers doing it in three different browsers, yet it must be done. And when I need some testing done on a critical build, that is the day my tester falls ill and does not turn up to work, while my client is waiting for the status. Oh, what a mess!

Four to six weeks from the day of the first test execution cycle, my testers get bored of the test cases. Their eyes are not as sharp as before. They feel tired - but they want a salary revision! One of my testers claims he has executed 80 test cases since morning; I am more than sure he could not have done that many. How can I be assured of what someone did or did not do?

The one best answer is automation. Instead of executing the test cases manually, do the testing using a tool. This can solve all the problems mentioned above. Tools never get tired, never get bored, never ask for salary revisions, never apply for leave; they are fast and they are consistent!

Before doing any test automation, we must carry out a small proof of concept (POC), a feasibility study of the automation tool on our application. This may take 4 to 8 hours, but it prevents a lot of issues the team would otherwise face later.

When it comes to test automation, the tester becomes a developer of automated test scripts; the tester generates code, using the tool, to test the application. There is a variety of tools available in the market: QTP by HP, SilkTest by Borland, Rational Functional Tester by IBM, TestComplete by SmartBear, Selenium, Ranorex, etc., to name a few. Some tools work only on browser-based web apps, some only on rich/thick client apps, and some on both. But all these tools drive the UI of the application to run tests. The human tester uses the UI to carry out functional tests, and these tools do the same: instead of a human clicking, the tool clicks the button; instead of a human typing, the tool mimics the keystrokes.

The following are the most common features that almost all tools share.
  1. Recording (Capture test steps)
  2. Replaying  (Playback test steps)
  3. Object Identification (knowing the forms and fields attributes on screen)
  4. Data Driven Test (use same steps with different data sets)
  5. Check points (Verification points, compare the actual results to expected results)
  6. Scripting (use a programming language to add intelligence to test scripts)
  7. File and Database handling (if results are stored on disk)
  8. Exception handling (recovery path when test script itself fails)
You need to think of test automation if you say Yes to one or more of the following points.
  1. The number of test cases for my product is large and I have many regression rounds
  2. My application is a product and not just a 4 months project
  3. My product needs to be tested on multiple environments for compatibility
  4. My product is being used by 1000s of customers and we cannot have a single regression issue
  5. I test my product very frequently, almost everyday
  6. My team costs more and more, and the project bleeds profitability
In the coming sections, we will discuss more about each of the automation features, in detail.

For free automation courses, visit http://www.openmentor.net.

Tuesday, February 12, 2013

Compatibility Testing

What shapes a person - nature or nurture? All said and done, the environment in which a person is nurtured has a tremendous impact on the person's nature, IQ and all other aspects. Software is no exception to this. The environment in which it runs determines the behavior of the software. Let us see the list of items that affect software from the environment angle. We can broadly divide this into client side compatibility and server side compatibility.

Let us first take client side compatibility. This is where the end customer sees your product. Start with the operating system, the ultimate controller of a physical computer. Assume we develop a product on Windows XP, compile it on Windows XP and test it on Windows XP. There is a very high probability that it will work fine. When we install the same product on Windows Vista, Windows 7 or Windows 8, what is the guarantee it will work the same way it did on Win XP? Absolutely none. We cannot predict the customer's operating environment, so it is our responsibility to test the product on different versions of the same OS family. Usually, we need to test the product on the latest, latest-1 and latest-2 versions of the OS.

If you look carefully at operating system releases, you will notice service packs (SP) and hotfixes (HF). These are patches applied to the OS itself. If I tested the product on OS + SP1 and a new SP2 is released, then to ensure the quality of the product in that environment, we need to test the product again on the OS + SP2 combination.

With internet penetration into every nook and corner, the browser war is always on; the top companies compete to gain browser share, and the browser is the primary interface for end users. If a product is tested only on Internet Explorer (IE) but customers prefer Firefox (FF) or Chrome, then it is mandatory to test the product across the different browsers. Each browser handles html/xml/json etc. slightly differently, so rendering may differ slightly; and when the alignment or rendering is affected, the end user experience differs.

Now, to drop a bigger bomb: what about testing the product on WinXP with IE 8, Win7 with IE 9, and Win8 with Firefox 12? With OS-browser pairs and their versions, the validation matrix becomes huge. Do we need to worry about this large matrix? If your end customer uses one such combination, and that person spends 1000s of dollars on your eCom portal, will you not take the effort to make it work? The more combinations you test, the more market share you can gain, as you cover a larger customer base.

With many different PCs, laptops and models coming out with different screen sizes and resolutions, consumer-facing applications such as online shopping portals need to be tested at different screen resolutions. A simple vertical or horizontal scroll bar may irritate the end user, and that one small dislike may make the user leave your product.

Compatibility test suites are large: you execute the same functional test cases, but on different environments. This takes more time and needs more people. But now we have a lot of cloud tools that automatically set up such combinations of environments, helping you test faster. You can look at browserstack.com as an example; there are many providers like it, and it is up to you to make the choice that suits you best.

For video lessons, please visit www.openmentor.net.