Tuesday, July 5, 2011

Automation Framework - Types


A test automation framework is a set of assumptions, concepts, and practices that provide support for automated software testing. The framework must be application-independent and easy to expand, maintain, and perpetuate. This lets us implement the extra effort just once and reuse it for every call to any component function.

The framework should handle all the details: ensuring we have the correct window, verifying that the element of interest is in the proper state, doing something with that element, and logging the success or failure of the entire activity.
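
As a rough sketch of what such a wrapper can look like (assuming Python with Selenium WebDriver; the function name safe_click, the logger setup, and the locators are illustrative, not taken from any particular framework):

import logging
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("framework")

def safe_click(driver, locator, timeout=10):
    # Verify the element is present, visible, and enabled, then act and log the outcome.
    try:
        element = WebDriverWait(driver, timeout).until(
            EC.element_to_be_clickable(locator))
        element.click()
        log.info("Clicked %s", locator)
        return True
    except Exception as exc:
        log.error("Could not click %s: %s", locator, exc)
        return False

With a wrapper like this in place, every test that clicks anything gets the state checks and the logging for free.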

Linear Framework

This is the most basic approach, and it offers no reusability. Each individual prepares a separate script that can be used only where it was written, so it does nothing to reduce rework.

Modularity Framework

The test script modularity framework requires the creation of small, independent scripts that represent modules, sections, and functions of the application-under-test (AUT). These small scripts are then used in a hierarchical fashion to construct larger tests, realizing a particular test case. This one is the simplest to grasp and master. It's a well-known programming strategy to build an abstraction layer in front of a component to hide the component from the rest of the application. This insulates the application from modifications in the component and provides modularity in the application design. The test script modularity framework applies this principle of abstraction or encapsulation in order to improve the maintainability and scalability of automated test suites.  
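
As a rough illustration of the idea (a Python/Selenium sketch; the application, its locators, and the function names are hypothetical):

from selenium.webdriver.common.by import By

def login(driver, user, password):
    # Module for the login section; locator details are hidden from the tests.
    driver.find_element(By.ID, "username").send_keys(user)
    driver.find_element(By.ID, "password").send_keys(password)
    driver.find_element(By.ID, "submit").click()

def open_account_summary(driver):
    # Module for the account summary section.
    driver.find_element(By.LINK_TEXT, "Account Summary").click()

def test_view_account_summary(driver):
    # A larger test constructed hierarchically from the modules above.
    login(driver, "demo_user", "demo_pass")
    open_account_summary(driver)

If the login page changes, only the login() module needs updating; every test built on top of it stays untouched.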

Data Driven Framework

Data driven scripts are application-specific scripts captured or manually coded in the automation tool's proprietary language and then modified to accommodate variable data. Variables are used for key application input fields and program selections, allowing the script to drive the application with external data supplied by the calling routine from the data table during the test run.
These data driven scripts often still contain hard-coded, and sometimes very fragile, recognition strings for the window components they navigate. A test automation framework relying on data driven scripts is definitely the easiest and quickest to implement when we have the technical staff to maintain it. We need to update the test data each time the business rules change, and we need to update the test scripts whenever the application user interface changes.
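
A minimal sketch of the idea (Python/Selenium; the file name login_data.csv, its columns, and the element IDs are assumptions for illustration). Note that the element IDs remain hard-coded in the script, which is exactly the fragility described above:

import csv
from selenium.webdriver.common.by import By

def run_login_tests(driver):
    with open("login_data.csv") as handle:
        for row in csv.DictReader(handle):  # one test iteration per data row
            driver.find_element(By.ID, "username").send_keys(row["username"])
            driver.find_element(By.ID, "password").send_keys(row["password"])
            driver.find_element(By.ID, "submit").click()
            # Compare the observed message against the expected value from the table.
            actual = driver.find_element(By.ID, "message").text
            print("PASS" if actual == row["expected"] else "FAIL", row["username"])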

Keyword Driven Framework

Sometimes this is also called "table driven" test automation. It is typically an application-independent automation framework designed to process our tests. These tests are developed as data tables using a keyword vocabulary that is independent of the test automation tool used to execute them. This keyword vocabulary should also be suitable for manual testing.
The data table records contain the keywords that describe the actions we want to perform. They also provide any additional data needed as input to the application, and where appropriate, the benchmark information we use to verify the state of our components and the application in general.
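
A minimal sketch of such an interpreter (Python/Selenium; the keyword vocabulary, element IDs, and sample records are illustrative only):

from selenium.webdriver.common.by import By

def do_enter(driver, target, value):
    driver.find_element(By.ID, target).send_keys(value)

def do_click(driver, target, value=None):
    driver.find_element(By.ID, target).click()

def do_verify(driver, target, expected):
    # Benchmark check: compare the component's state against the expected value.
    actual = driver.find_element(By.ID, target).text
    print("PASS" if actual == expected else "FAIL", target)

KEYWORDS = {"enter": do_enter, "click": do_click, "verify": do_verify}

test_table = [  # would normally live in a spreadsheet or CSV file
    ("enter",  "username", "demo_user"),
    ("enter",  "password", "demo_pass"),
    ("click",  "submit",   None),
    ("verify", "message",  "Welcome, demo_user"),
]

def run(driver, table):
    for keyword, target, value in table:
        KEYWORDS[keyword](driver, target, value)

Because the table speaks in keywords rather than tool commands, the same records could be followed by a manual tester or replayed through a different automation tool.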

Another advantage of the keyword driven approach is that testers can develop tests without a functioning application, as long as preliminary requirements or designs can be determined. All the tester needs is a fairly reliable definition of what the interface and functional flow are expected to be like. From this, they can write most, if not all, of the data table test records.
The main cost of this approach is the heavy initial investment needed to build the keyword vocabulary and the framework that processes it. Fortunately, this investment is mostly a one-shot deal. Once in place, keyword driven automation is arguably the easiest of the data driven frameworks to maintain and perpetuate, providing the greatest potential for long-term success.

Hybrid Framework

The most commonly implemented framework is a combination of all of the above techniques, pulling from their strengths and trying to mitigate their weaknesses. This hybrid test automation framework is what most frameworks evolve into over time and across multiple projects.

Thursday, June 30, 2011

Test Automation Tools - Load/Performance Testing

Load and performance test tools:

Proprietary/Licensed Tools:

1. LoadRunner - HP (formerly Mercury)
2. Rational Performance Tester - IBM
3. SilkPerformer - Micro Focus (formerly Borland)
4. Visual Studio Team System (VSTS) - Microsoft
5. Forecast Web - Facilita
6. e-Load - Empirix
7. NeoLoad - Neotys
8. QALoad - Compuware

Open source Tools:

1. OpenSTA
2. WebLOAD
3. JMeter

Test Automation Tools - Functional Testing

Various test automation tools are available in the market; a few are open source and the rest are proprietary.

Licensed/Proprietary Tools:

1. QuickTest Professional (QTP) - HP (formerly Mercury)
2. Rational Functional Tester (RFT) - IBM (formerly Rational)
3. SilkTest - Micro Focus (formerly Borland)
4. WinRunner - HP (formerly Mercury)
5. Janova Basic / Professional / Enterprise - Janova
6. TestComplete - SmartBear
7. TestPartner - Micro Focus
8. SOAtest - Parasoft
9. AutoIt

Open Source Tools:

1. Selenium
2. Watir
3. WatiN
4. Canoo WebTest
5. QAT
6. Sahi
7. Tomato
8. WET
9. soapUI
10. Dogtail

Wednesday, June 29, 2011

Test Automation and Feasibility Study

Test automation is worth considering when an application needs to be tested with multiple sets of data, when we need to navigate through the same functionality multiple times, or when the application has a regression phase. The purpose of test automation is to save testing effort and to achieve consistent quality. Even in these cases, automation cannot always be implemented: other parameters drive the decision, including the project budget, the number of human resources, the number of regression cycles to run, and the contract duration.

Sometimes it is not possible to automate the complete functionality of an application. The regression test cases/scenarios decide the need for test automation. Automation is difficult if the application functionality changes rapidly, or if the application has objects that are not static and move within the page.

If the application qualifies for test automation, we then need to do a feasibility study to choose the automation tool that best meets the testing requirements. The following points should be considered during this phase:

1. Which technologies does the test automation tool support (add-ins)?

2. Which languages does the test automation tool understand?

3. How quickly can we learn the test automation tool?

4. How quickly can we develop scripts using the test automation tool?

5. How fast does the test automation tool execute the code?

6. What are the cost and reputation of the test automation tool?

Apply the above questionnaire to the different tools available and use the answers to decide which test automation tool to use in your testing project.
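
One hedged way to turn those answers into a decision is a simple weighted score; the weights, candidate tools, and scores below are placeholders, not recommendations:

# Score each candidate tool on each question (0-5) and weight the answers.
WEIGHTS = {"technologies": 3, "languages": 2, "learning_curve": 2,
           "scripting_speed": 2, "execution_speed": 1, "cost_reputation": 3}

candidates = {
    "Tool A": {"technologies": 4, "languages": 3, "learning_curve": 2,
               "scripting_speed": 3, "execution_speed": 4, "cost_reputation": 1},
    "Tool B": {"technologies": 3, "languages": 4, "learning_curve": 4,
               "scripting_speed": 3, "execution_speed": 3, "cost_reputation": 4},
}

def total(scores):
    # Weighted sum of a tool's answers.
    return sum(WEIGHTS[question] * score for question, score in scores.items())

best = max(candidates, key=lambda name: total(candidates[name]))
print(best, total(candidates[best]))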

Thursday, June 23, 2011

Entry and Exit Criteria in Testing

Entrance Criteria:
There is no standard for what should be included in the entrance criteria, but there are a number of items which should be considered for inclusion.

* All developed code must be unit tested; Unit and Link Testing must be completed and signed off by the development team
* System Test plans must be signed off by the Business Analyst and the Test Controller
* All human resources must be assigned and in place
* All test hardware and environments must be in place, and free for System Test use
* The Acceptance Criteria must be prepared

Exit Criteria:
* All Test Cases have been executed and passed
* All Defects have been fixed and closed, or deferred
* System test results and metrics have been documented
* All critical/high-severity defects are resolved and successfully retested
* 90% of business processes are working
* All critical business processes are working
* The software runs on all the product's supported hardware and software configurations
* The final draft of the product publications, including the Quick Start Guide, has been reviewed, tested in QA, and approved by the Core Team

Some Testing Definitions

Test
An activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system/component

Test Bed
An environment containing the hardware, instrumentation, simulation, software tools and other support elements needed to conduct a test

Test Case
A set of test inputs, execution conditions and expected results developed for a particular objective such as to exercise a particular program, path or to verify compliance with a specific requirement

Test Coverage
The degree to which a given test or set of tests addresses all specified requirements for a given system or component

Test Driver/Test Harness
A software module used to invoke a module under test and, often, to provide test inputs, control and monitor execution, and report test results
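
To make the definition concrete, here is a minimal sketch of a driver exercising a hypothetical add() module (both names are invented for illustration):

def add(a, b):
    # The module under test (hypothetical).
    return a + b

def driver():
    # The test driver: supplies inputs, controls execution, and reports results.
    cases = [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]
    for args, expected in cases:
        result = add(*args)
        print("PASS" if result == expected else "FAIL", args, result)

driver()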

Test Case Generator
A software tool that accepts source code, test criteria, specifications, or data structure definitions as input; uses these inputs to generate test input data; and sometimes determines the expected results

Test Case Specification/ Test Description
A document that specifies the test inputs, execution conditions and predicted results for an item to be tested

Test Report
A document that describes the conduct and results of the testing carried out for a system or component

Acceptance Criteria
The criteria that a system/component must satisfy in order to be accepted by a user, customer, or other authorized entity

Wednesday, June 22, 2011

Roles and Responsibilities in Testing

The number of roles simultaneously existing would depend on the size and scope of the testing activity.  In a small organization in which there are a few testing professionals, one individual may assume the roles of both Test Analyst and Tester on a testing project. Similarly one person may assume the responsibilities of the Testing Manager and Test Team Leader if there is only one testing task or project to manage. A single person may also assume different roles on different projects.  For example, a Test Analyst on one project may perform the role of Test Team Leader on another.

Test Manager:
This role should be created right from the beginning of the software development project, and certainly no later than the end of the requirements gathering phase. The Test Manager has the responsibility to administer the organizational aspects of the testing process on a day-to-day basis and is responsible for ensuring that the individual testing projects produce the required products to the required standards of quality and within the specified constraints of time, resources, and budget. He is also responsible for liaison with the development teams to ensure that they follow the Unit Testing and Independent Testing approach documented within the process. The Test Manager reports to senior management or a director within the organization, such as the Quality Assurance Manager or the Information Technology Director. In large organizations, and particularly those following a formal project management process, the Test Manager may also report to a Testing Program Board, which is responsible for the overall direction of the project management of the testing program.

Test Team Leader:
In small testing projects the Test Manager himself may play this role; in large projects this role needs to be created separately. This role should be in place after the test plan is ready and the test environment is established. The Test Team Leader is given the responsibility to carry out the day-to-day testing activities. His or her responsibilities include assigning tasks to one or more Testers/Test Analysts, monitoring their progress against agreed-upon plans, setting up and maintaining the testing project, and ensuring the generation of the test artifacts. One or more Test Analysts and Testers report to the Test Team Leader, who in turn reports to the Test Manager. During Acceptance Testing the Test Team Leader is responsible for coordinating with the User Representative and Operations Representative to obtain one or more users to perform Acceptance Testing, which is the responsibility of the customer.

Test Analyst:
The Test Analyst is responsible for the design and implementation of one or more Test Scripts/Test Cases, which will be used to accomplish the testing of the AUT. If the scope of the testing activity is small, the Test Team Leader might end up doing many of the tasks described here. The Test Analyst may also be called upon to assist the Test Team Leader in the generation of the Test Specification Document. During test case design, the Test Analyst will need to analyze the Requirements Specification for the AUT to identify the specific requirements that must be tested. He will need to prioritize the Test Cases to reflect the importance of the feature being validated and the risk of the feature failing during normal use of the AUT. On completion of the testing project, the Test Analyst is responsible for the back-up and archival of all testing documentation and materials. He is also responsible for completing a Test Summary Report briefly describing the key points of the testing project.

Tester:
The Tester is primarily responsible for the execution of the Test Scripts/Test Cases created by the Test Analyst and for the interpretation and documentation of the results of the Test Case execution. During test execution, the Tester is responsible for filling in the Test Results Record Forms to document the observed result of executing each test case, and for co-signing the bottom of each form with the Independent Test Observer to confirm that the Test Case was followed correctly and the observed result recorded accurately. The Tester is also responsible for the recovery of the test environment in the event of a failure of the system.

Independent Test Observer
The Independent Test Observer is responsible for providing independent verification that correct procedures are followed during the testing of the AUT, and for ensuring that the Tester executes the tests according to the instructions provided in the test cases. In organizations in which there is a formal Quality Assurance Group, the Independent Test Observer may be drawn from the ranks of the Quality Assurance Representatives. In small organizations where there is no formal Quality Assurance group, the Independent Test Observer may be a staff member drawn from another group or project within the organization. The key criterion in selecting an Independent Test Observer is that he or she must be impartial and objective.