Monday 28 January 2013

An Overview of Test Automation



Basically, Software Testing is of two types:

1) Manual Testing
2) Test Automation

Manual Testing:
Testing software manually is called Manual Testing. We can test all aspects of software manually.
The below testing types can be performed manually:

Test Types:
a) Functional Testing
b) Regression Testing
c) GUI Testing
d) Usability Testing
e) Security Testing
f) Compatibility Testing
g) Recovery Testing
h) Reliability testing
Etc…
 
Drawbacks of Manual Testing

(i) Time consuming
(ii) More resources required
(iii) Prone to human errors
(iv) Repeating the same task many times is impractical
(v) Tiredness
(vi) Simultaneous (parallel) actions are not possible

Test Automation:

Testing software using automation tools is called Test Automation.

Advantages of Test Automation:

a) Fast: Tools execute tests faster than human users

b) Reliable: Tools are reliable for complex calculations and tasks

c) Reusable: We can reuse automated tests any number of times

d) Repeatable: We can repeat the same operations as many times as required

e) Programmable: We can use flow control statements for applying logic

f) Comprehensive: We can execute test batches without human interaction

Test Automation can be used in the below areas of testing:

a)  Functional & Regression Testing

b) Load/Stress/Performance Testing

c) Security Testing

d) Unit Testing

Drawbacks of Automation Testing
1) It is expensive
2) We cannot automate all areas
3) Lack of expertise
4) It has some limitations (it cannot test everything)

Which Software Testing should be automated?

Tests that need to be executed for every build of the application (Sanity Testing)

Tests that use multiple data values (Retesting / Data Driven Testing)

Tests that require data from the application's GUI attributes

Load and Stress Testing

Which Software Testing should not be automated?

 
Usability Testing

One-time testing

Quick-look tests or A.S.A.P. (As Soon As Possible) testing

Ad-hoc testing / Random testing

Tests for which customer requirements are frequently changing
Types of Test Tools:
-------------------

    Business:
-----------------
    a) Vendor tools

       Ex: HP - WinRunner, LoadRunner, QTP, QC
           IBM - Rational Robot, RFT, RPT, QA Director
           Borland - SilkTest, Silk Performer etc.

    b) Open Source tools:
       Ex: Selenium, JMeter, QAWebLoad, Bugzilla etc.

    c) In-house tools

    Technical:
-----------------
    a) Functional & Regression test tools:
       Ex: WinRunner, QTP, Rational Robot, RFT, SilkTest, Selenium etc.

    b) Performance/Load/Stress test tools:
       Ex: LoadRunner, RPT, Silk Performer, JMeter, QAWebLoad etc.

    c) Test Management tools:
       Ex: QC, QA Director etc.
 
    d) Defect Management tools

    e) Unit Test tools (Ex: JUnit)

Testing Process in QTP


7 Stages of QTP Testing Process

1) Planning

o Analyzing the AUT
o Automation Test Plan Generation
o Automation Framework Implementation
o Generating/Selecting Test cases for Automation
o Collecting Test Data
o QTP Tool Settings Configuration

2) Generating Tests

o Recording
o Keyword driven methodology
o Descriptive Programming

3) Enhancing Tests

o Inserting Checkpoints
o Inserting Output values
o Adding Comments
o Synchronization
o Parameterization (a VBScript sketch of synchronization and parameterization follows this list)
o Inserting Flow Control Statements
o Calling User defined functions and/or Reusable Actions
o Generating Steps through Step Generator
o Inserting Transaction Points
o Regular Expressions
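
For example, a minimal VBScript sketch of the Synchronization and Parameterization steps above, based on the Flight Reservation sample application (the Data Table column name "AgentName" is an assumed example):

' Synchronization: wait up to 10 seconds for the Update Order button to become enabled
Window("Flight Reservation").WinButton("Update Order").WaitProperty "enabled", True, 10000

' Parameterization: read the value from the Data Table instead of hard-coding it
Dialog("Login").WinEdit("Agent Name:").Set DataTable("AgentName", dtGlobalSheet)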

4) Debugging Tests

o Debug Commands & Break Points
o Step by step execution
o Watching Variables
o Changing values of variables

5) Running Tests

o Normal Execution
o Batch Execution
o Through AOM Scripting (see the sketch after this list)
o Tests Running through framework
o Scheduled Execution
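
As an illustration of running tests through AOM (Automation Object Model) scripting, the sketch below launches QTP, opens a saved test and runs it; the test path "C:\Tests\Test1" is an assumed example. Such a script is usually saved as a .vbs file and run outside QTP.

Dim qtApp
Set qtApp = CreateObject("QuickTest.Application")   ' create the QTP automation object
qtApp.Launch                                        ' start QuickTest
qtApp.Visible = True                                ' show the QTP window
qtApp.Open "C:\Tests\Test1"                         ' open the test (assumed example path)
qtApp.Test.Run                                      ' run the opened test
qtApp.Test.Close
qtApp.Quit
Set qtApp = Nothing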

6) Analyzing Results

o QTP Result window
o Defining our own Results (see the sketch after this list)
o Exporting Results
o Deleting Results
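
For defining our own results, a minimal sketch using the Reporter utility object (the step names and messages are assumed examples):

' Send custom pass/fail entries to the QTP Result window
Reporter.ReportEvent micPass, "Login step", "Login completed successfully"
Reporter.ReportEvent micFail, "Order step", "Order number was not generated"

' Optionally control how much gets reported
Reporter.Filter = rfDisableAll   ' stop sending results
' ... steps whose results we do not want in the report ...
Reporter.Filter = rfEnableAll    ' resume sending results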

7) Reporting Defects

o Manual Defect Reporting
o Tool based Defect Reporting
o Working with Quality Center
----------------------------------------------------------- 
Types of Statements in QTP Test / Test Script

i) Declarations (Variables and constants)
Dim x, y, z
Const City = "Hyderabad", Price = 100   ' Const requires a value (example values shown)
ii) Object calls

Ex1: Dialog("Login").WinEdit("Agent Name:").Set "gcreddy"
Ex2: Browser("Google").Page("Google").Link("Gmail").Click

iii) Comments
iv) Flow Control Statements (Conditional & Loop)
Ex: If Total = Tickets * Price Then
        Msgbox "Test Passed"
    Else
        Msgbox "Test Failed"
    End If

v) Function / Action calls
Ex: Call Login("gcreddy","mercury")
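
For the above call to work, a reusable function named Login has to be defined (in a function library or an action). A minimal sketch, assuming the Flight Reservation sample Login dialog:

' User-defined function that performs the login operation
Function Login(Agent, Pwd)
    Dialog("Login").WinEdit("Agent Name:").Set Agent
    Dialog("Login").WinEdit("Password:").Set Pwd
    Dialog("Login").WinButton("OK").Click
End Function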
vi) Utility Statements
Ex1: SystemUtil.Run "C:\Program Files\HP\QuickTest Professional\samples\flight\app\flight4a.exe"
(It launches the application)
Ex2:
SystemUtil.Run "C:\Program Files\Internet Explorer\IEXPLORE.EXE","http://www.icicibank.com/"

vii) Other VBScript statements
Examples:
Option Explicit
Wait (14)
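
Putting the above statement types together, a minimal sketch of a small test on the Flight Reservation sample application (the values and the 2-second wait are assumed examples):

Option Explicit                 ' other VBScript statement
Dim Total, Tickets, Price       ' declarations

' Utility statement: launch the sample application
SystemUtil.Run "C:\Program Files\HP\QuickTest Professional\samples\flight\app\flight4a.exe"
Wait 2

' Function call (see the Login function sketch above)
Call Login("gcreddy", "mercury")

' Flow control statement
Tickets = 2
Price = 100
Total = 200
If Total = Tickets * Price Then
    Msgbox "Test Passed"
Else
    Msgbox "Test Failed"
End If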

Recording and Running



a) Test Recording Process

It is the process of recording user operations on the AUT (Application Under Test). During recording, QTP creates steps in the Keyword view and generates corresponding script statements in the Expert view. Simultaneously, it adds object information to the Object Repository.
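
For instance, recording a login on the Flight Reservation sample application might generate statements like the following in the Expert view (a sketch; the exact object names depend on what is stored in the repository):

Dialog("Login").WinEdit("Agent Name:").Set "gcreddy"
Dialog("Login").WinEdit("Password:").SetSecure "encryptedvalue"   ' passwords are recorded in encrypted form; placeholder shown
Dialog("Login").WinButton("OK").Click
Window("Flight Reservation").Close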

b) Running /Execution Process
During running, QTP reads the statements one by one, gets the object information from the Object Repository, and based on that information performs the operations on the AUT.

c) Recording Modes
QTP has 3 Recording Modes to generate Tests / Test Scripts
i)  Normal Recording

It records user mouse and keyboard operations on the AUT with respect to objects, but it is unable to record continuous mouse operations such as digital signatures, graphs, painting etc.

During recording, QTP generates VBScript statements in the Test pane; simultaneously, it stores object information in the Object Repository.

Navigation: Automation > Record
            Or
            Select the Record option on the Automation toolbar
            Or
            Use the shortcut key (F3)

Steps for preparing a Test (through Recording):

1. Put the AUT in its base state
2. Select the Record option
3. The Record and Run Settings dialog appears; select the type of environment (Windows or Web)
4. Select a Record and Run option

(It shows two options:
1. Record and run the test on any open Windows-based application
2. Record and run only on the specified application

If we select the first option, it records on any application opened on the desktop.
If we select the second option, it asks for the path of the AUT; after providing the path, it records only on that particular application.)

5. Click OK
6. Perform actions on the AUT
7. Stop recording
8. Save the test

ii) Analog Recording:

It records the exact mouse movements and keyboard operations. We can use this mode for recording continuous mouse operations. It is not useful for recording normal operations because it does not generate a step for each operation; instead, it stores all the user actions in a track file. The track file is not editable.
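
In the script, an analog recording session typically appears as a single statement that replays the track file, for example (a sketch; the track name is whatever QTP assigns):

' Replays the recorded track file relative to the desktop
Desktop.RunAnalog "Track1"

' Or, when recorded relative to a window:
Window("Flight Reservation").RunAnalog "Track1"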

Navigation:

1. Keep the tool in recording mode
2. Automation > Analog Recording
   Or
   Use the shortcut key (Shift + Alt + F3)

Steps for preparing a Test (through Analog Recording):

1. Launch the AUT (or launch it through QTP)
2. Select the Record option
3. Automation > Analog Recording
4. The Analog Recording Settings dialog box opens

(In this dialog box two options are available:

1. Record relative to the screen
2. Record relative to the following window

If we select the first option, QTP records user operations with respect to desktop co-ordinates.
If we select the second option, we have to show the window (AUT); after showing the window, it records with respect to that window's co-ordinates.)

5. Select one of the options in the dialog box and click Start Analog Record
6. It records the user actions
7. Stop recording


iii) Low Level Recording

Apart from normal operations, it can record some operations on environments that QTP does not support.

This mode records at the object level and records all run-time objects as Window or WinObject test objects.

Use Low Level Recording for recording in an environment not recognized by QTP.
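
A low-level recorded script typically consists of coordinate-based statements on generic Window and WinObject test objects, for example (a sketch; the object names and co-ordinates are assumed):

' Coordinate-based steps generated by low-level recording
Window("Flight Reservation").WinObject("Date of Flight:").Click 25, 10
Window("Flight Reservation").WinObject("Date of Flight:").Type "121212"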

Navigation:
1. Keep the tool in recording mode
2. Automation > Low Level Recording

Steps for preparing a Test (through Low Level Recording):

1) Launch the AUT (or launch it through QTP)
2) Select the Record option
3) Automation > Low Level Recording
4) Perform operations on the AUT
5) Stop recording
6) Save the test

d) Disadvantages of Recording
• It occupies a lot of memory space (due to duplicate objects), so QTP performance is reduced

• There is no centralized maintenance mechanism, so modifications are very difficult

• The user may not have full command over the recorded script, so locating errors is difficult

• Recorded scripts are QTP internal files; they may become corrupted

e) Advantages of Recording / Where Applicable
• It is used for analyzing the AUT in the initial stage, to find out whether the QTP tool is recognizing all of our application objects or not

• It is easy to create Tests / Test Scripts

• It is used for frequently changing UI (User Interface)

• It takes less time to create Tests


DataProvider in TestNG


TestNG provides many features that you can use in different ways; I will mention one of them in this blog.

@DataProvider
A DataProvider provides data to the test method that depends on it. Whenever we define a DataProvider, it should return a two-dimensional object array, "Object[][]". The test method that uses the DataProvider is executed once for each row of the array returned by the DataProvider. For example:
@Test(dataProvider = "data")
public void printMethod(String s) {
    System.out.println(s);
}

@DataProvider(name = "data")
public Object[][] dataProviderTest() {
    return new Object[][]{{"Test Data 1"}, {"Test Data 2"}, {"Test Data 3"}};
}

The Output will be:
Test Data 1
Test Data 2
Test Data 3

As you can see, the test method "printMethod" gets called 3 times, depending upon the data provided by the DataProvider. The DataProvider can be used for getting data from a file or a database according to test requirements. Below I will mention two ways to use a DataProvider.

For example, suppose you need to read data from a file and print each line to the console. For doing that, you have written some file-processing API that returns a List of the data read from the file. You can iterate over the List either at the DataProvider level or at the test method level. I am mentioning both below.
1.
@DataProvider(name = "data")
public Object[][] init1() {
    // fileObject.getData() is the file-processing API that returns the lines read from the file
    List<String> list = fileObject.getData();
    Object[][] result = new Object[list.size()][];
    int i = 0;
    for (String s : list) {
        result[i] = new Object[]{s};   // one row per line, so the test method runs once per line
        i++;
    }
    return result;
}

@Test(dataProvider = "data")
public void runTest1(String s) {
    System.out.println("Data " + s);
}
In this implementation, we iterate over the List at the DataProvider level, store it in an Object[][] array named result, and return that result. The test method is then called once for each element of the List.

2.
@DataProvider(name = "data")
public Object[][] init() {
    List<String> list = fileObject.getData();
    // wrap the whole List in a single row, so the test method runs only once
    return new Object[][]{{list}};
}

@Test(dataProvider = "data")
public void runTest(List<String> list) {
    for (String s : list) {
        System.out.println("Data " + s);
    }
}
In this implementation, we return the List itself to the test method, and the method iterates over the List data. Either implementation can be used, depending on the test requirements.
The output of both implementations shown above remains the same; the only thing that changes is where you return the data and where you iterate.