Monday, February 25, 2013

Test Automation Basics



One of the standard issues every test manager faces is having proof of what is tested and what is not. If I have 300-400 test cases, I am in full control. But if I have 5000-odd test cases, how do I know whether the 3617th test case was executed or not? I trust my testers. Now imagine I need to run 200 test cases in 3 different browsers; that multiplies my effort. I cannot afford 3 testers to cover three different browsers, yet it must be done. And when testing is needed on a critical build, my tester may fall ill that very day and not turn up to work, while my client is waiting for the status. Oh, what a mess!

Four to six weeks from the first test execution cycle, my testers get bored of the test cases. Their eyes are not as sharp as before. They feel tired. But they still want a salary revision! One of my testers claims that he has done 80 test cases since morning; I am more than sure he could not have done that many. How can I be certain whether someone did the work or not?

The single best answer is automation. Instead of executing the test cases manually, do the testing using a tool. This can solve all the problems mentioned above. A tool never gets tired, never gets bored, does not ask for salary revisions, is fast, does not take leave, and is consistent!

Before doing any test automation, we must carry out a small proof of concept (POC), a feasibility study of the automation tool on our application. This may take 4 to 8 hours, but it can prevent a lot of issues that the team would otherwise face later.

When it comes to test automation, a tester becomes a developer of automated test scripts. That is, the tester generates code using the tool to test the application. There is a variety of tools available in the market: QTP by HP, SilkTest by Borland, Rational Functional Tester by IBM, TestComplete by SmartBear, Selenium, Ranorex, to name a few. Some tools work only on browser-based web apps, some work only on rich/thick client apps, and some work on both. But all these tools use the UI of the application to run tests. Just as a human tester uses the UI to carry out functional tests, these tools do the same: instead of a human doing a click, the tool clicks the button; instead of a human typing, the tool mimics the keystrokes.

The following are the most common features that almost all tools share.
  1. Recording (Capture test steps)
  2. Replaying  (Playback test steps)
  3. Object Identification (knowing the forms and fields attributes on screen)
  4. Data Driven Test (use same steps with different data sets)
  5. Check points (Verification points, compare the actual results to expected results)
  6. Scripting (use a programming language to add intelligence to test scripts)
  7. File and Database handling (if results are stored on disk)
  8. Exception handling (recovery path when test script itself fails)
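Features 4, 5 and 8 above can be sketched in plain Python, independent of any particular tool. Here, `check_login` is a hypothetical stand-in for the application action a real tool would drive through the UI; the names and data are illustrative assumptions, not part of any tool's API.

```python
# Tool-agnostic sketch: data-driven testing with checkpoints and
# exception handling. check_login() is a hypothetical stand-in for
# the real UI action an automation tool would perform.

def check_login(username, password):
    # Pretend application logic: only one valid credential pair.
    return username == "admin" and password == "secret"

# Data-driven test: the same steps run with different data sets (feature 4).
test_data = [
    ("admin", "secret", True),   # valid credentials
    ("admin", "wrong", False),   # wrong password
    ("", "", False),             # empty input
]

def run_tests():
    results = []
    for user, pwd, expected in test_data:
        try:
            actual = check_login(user, pwd)
            # Checkpoint: compare actual result to expected (feature 5).
            results.append("PASS" if actual == expected else "FAIL")
        except Exception:
            # Exception handling: the script recovers and moves on
            # instead of aborting the whole run (feature 8).
            results.append("ERROR")
    return results

print(run_tests())  # ['PASS', 'PASS', 'PASS']
```

A real tool wraps the same ideas in its own script editor: the data sets live in a spreadsheet or table, and the checkpoint is a built-in verification point rather than a hand-written comparison.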
You need to think of test automation if you say Yes to one or more of the following points.
  1. The number of test cases for my product is large and I have many regression rounds
  2. My application is a product and not just a 4 months project
  3. My product needs to be tested on multiple environments for compatibility
  4. My product is being used by 1000s of customers and we cannot have a single regression issue
  5. I test my product very frequently, almost everyday
  6. My team costs me more and more, and the project is bleeding profitability
In the coming sections, we will discuss each of these automation features in detail.

For free automation courses, visit http://www.openmentor.net.
 

Tuesday, February 12, 2013

Compatibility Testing

What shapes a person - nature or nurture? All said and done, the environment in which a person is nurtured has a tremendous impact on that person's nature, IQ and every other aspect. Software is no exception. The environment in which it runs determines the behavior of the software. Let us look at the environmental items that affect software. We can broadly divide this into client side compatibility and server side compatibility.

Let us first take client side compatibility. This is where the end customer sees your product. Start with the operating system, the ultimate controller of a physical computer. Assume that we develop a product on Windows XP, compile it on Windows XP and test it on Windows XP. There is a very high probability that it will work fine. But when we install the same product on Windows Vista, Windows 7 or Windows 8, what is the guarantee it will work the same way it did on Windows XP? Absolutely none. Since we cannot predict the customer's operating environment, it is our responsibility to test the product on different versions of the same OS family. Usually we need to test the product on the latest, latest - 1 and latest - 2 versions of the OS.

If you look carefully at operating system releases, you will notice service packs (SP) and hotfixes (HF). These are patches applied to the OS itself. If we had tested the product on OS + SP1, and a new SP2 is released, then to ensure the quality of the product in that environment we need to test the product again on the OS + SP2 combination.

With internet penetration reaching every nook and corner, the browser war is always on, with the top companies competing for browser share. The browser is the primary interface to end users. If a product is tested only in Internet Explorer (IE), but customers prefer Firefox (FF) or Chrome, then it is mandatory to test the product across those browsers. Because each browser handles HTML/XML/JSON slightly differently, rendering may differ. When alignment or rendering gets affected, the end user experience suffers.

Now, to drop a bigger bomb: what about testing the product in the WinXP + IE 8, Win7 + IE 9 and Win8 + Firefox 12 combinations? With OS-browser pairs and their versions, the validation matrix becomes huge. Do we need to worry about such a large matrix? If your end customer uses one such combination, and that person spends thousands of dollars on your eCommerce portal, will you not take the effort to make it work? The more combinations you test, the more market share you can gain, as you cover a larger customer base.
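To see how quickly the matrix grows, a few lines of Python can enumerate the combinations. The OS and browser names here are just the examples from the paragraph above; a real matrix would also include versions, service packs and resolutions.

```python
# Sketch: enumerating an OS-browser validation matrix.
from itertools import product

os_list = ["WinXP", "Win7", "Win8"]
browsers = ["IE 8", "IE 9", "Firefox 12"]

# Every (OS, browser) pair the product should be validated on.
matrix = list(product(os_list, browsers))

print(len(matrix))  # 9 - and that is with only 3 OSes and 3 browsers
```

Add three screen resolutions to the mix and the count triples to 27; this multiplication is exactly why teams prune the matrix to the combinations their real customers use.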

With so many PCs, laptops and models coming out with different screen sizes and resolutions, consumer-facing applications such as online shopping portals need to be tested at different screen resolutions. A simple vertical or horizontal scroll bar may irritate the end user, and that one small dislike may make the user leave your product.

Compatibility test suites are large. You execute the same functional test cases, but on different environments; this takes more time and needs more people. Fortunately, there are now many cloud tools that automatically set up such combinations of environments and help you test faster. browserstack.com is one example. There are many providers like this; it is up to you to make the choice that suits you best.

For video lessons, please visit www.openmentor.net.


 

License To Crack - A blog-book on Software Testing

Hello,

Welcome to the world of software testing. This page contains a series of links that will help you understand software testing, step by step. Along with this text content, you can use www.openmentor.net videos to learn software testing. 

Enjoy learning.

Functional Testing

  1. License To Crack - Introduction to Testing
  2. SDLC - First 3 Critical Phases 
  3. SDLC - Coding, Testing, Implementation, Maintenance
  4. Five angles to look at everything 
  5. Top 100 Test Scenarios for Online Shopping Apps 
  6. Writing Test Cases 
  7. Top 100 Test Scenarios – Inventory Management
  8. Boundary Value Analysis (BVA) 
  9. Equivalence Partitions 
  10. The CRUD Approach 
  11. Review before you execute tests 
  12. Test environment (test bed) setup
  13. The first hour of testing 
  14. Test Execution - How can we do faster? 
  15. Tester's friend - The BUG 
  16. Bug Life Cycle 
  17. Regression Testing 
  18. How to measure and analyze the testing efficiency?  
  19. What does one need to get a job? 
  20. Compatibility Testing 
Test Automation

  1. Test Automation Basics 
  2. Record and Replay 
  3. Object Identification 
  4. Data Driven Test 
  5. Checkpoints 
  6. File/DB Checkpoints 
  7. Scripting Essentials 
  8. Exception Handling