Tuesday, July 5, 2011

Automation Framework - Types


A test automation framework is a set of assumptions, concepts, and practices that provides support for automated software testing.  A framework should be easy to expand, maintain, and perpetuate, and should be application independent.  This allows us to implement the extra effort just once and reuse it for every call to any component function.

The framework should handle all the details: ensuring we have the correct window, verifying that the element of interest is in the proper state, doing something with that element, and logging the success or failure of the entire activity.
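
As a rough illustration, the snippet below shows what such a reusable "component function" might look like in Python. The driver object and its find_window/find_element methods are hypothetical stand-ins for whatever automation tool is actually in use; this is a sketch of the idea, not any particular tool's API.

```python
# Minimal sketch of a framework-level "component function" wrapper.
# The driver API (find_window, find_element, is_enabled, click) is hypothetical.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("framework")

def safe_click(driver, window_title, element_id):
    """Verify the window and element, perform the click, and log the outcome."""
    try:
        window = driver.find_window(window_title)    # do we have the correct window?
        element = window.find_element(element_id)    # is the element present?
        if not element.is_enabled():                 # is it in the proper state?
            raise RuntimeError(f"{element_id} is not enabled")
        element.click()                              # do something with the element
        log.info("Clicked %s in %s", element_id, window_title)
        return True
    except Exception as exc:
        log.error("Click failed for %s in %s: %s", element_id, window_title, exc)
        return False
```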

Linear Framework

This is the most basic approach, and it offers no reusability.  Each individual prepares a separate script, and that script can be used only where it was written, so this framework does not reduce rework.

Modularity Framework

The test script modularity framework requires the creation of small, independent scripts that represent modules, sections, and functions of the application-under-test (AUT). These small scripts are then used in a hierarchical fashion to construct larger tests, realizing a particular test case. This one is the simplest to grasp and master. It's a well-known programming strategy to build an abstraction layer in front of a component to hide the component from the rest of the application. This insulates the application from modifications in the component and provides modularity in the application design. The test script modularity framework applies this principle of abstraction or encapsulation in order to improve the maintainability and scalability of automated test suites.  
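
A minimal Python sketch of the idea follows. The LoginPage and SearchPage classes, the element names, and the driver's type/click methods are all hypothetical; the point is that each module of the AUT gets its own small script, and larger tests are composed from them.

```python
# Modularity sketch: small, independent scripts per module of the AUT,
# combined hierarchically into a larger test. Driver API is hypothetical.

class LoginPage:
    """Small, independent script for the login section of the AUT."""
    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type("user_field", user)
        self.driver.type("password_field", password)
        self.driver.click("login_button")

class SearchPage:
    """Small, independent script for the search module."""
    def __init__(self, driver):
        self.driver = driver

    def search(self, term):
        self.driver.type("search_box", term)
        self.driver.click("search_button")

def test_search_as_registered_user(driver):
    """Larger test built hierarchically from the module scripts."""
    LoginPage(driver).login("demo_user", "demo_pass")
    SearchPage(driver).search("test automation")
```

Because the element names live only inside the module scripts, a change to the login screen means updating LoginPage once rather than every test that logs in.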

Data Driven Framework

Data-driven scripts are application-data-specific scripts, captured or manually coded in the automation tool's proprietary language and then modified to accommodate variable data. Variables are used for key application input fields and program selections, allowing the script to drive the application with external data supplied by the calling routine from the data table during the test run.
These data-driven scripts often still contain hard-coded and sometimes very fragile recognition strings for the window components they navigate. A test automation framework relying on data-driven scripts is definitely the easiest and quickest to implement when we have the technical staff to maintain it.  We need to update the test data each time the business rules change and update the test scripts whenever the application user interface changes.
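
The sketch below shows the data-driven shape in Python, assuming a hypothetical driver object and a CSV file named orders.csv with item, quantity and expected_total columns (all invented for illustration): the script logic stays fixed while the external data table supplies the values.

```python
# Data-driven sketch: one parameterised script fed rows from an external data table.
# The driver API, CSV file name, and column names are hypothetical.
import csv

def place_order(driver, item, quantity, expected_total):
    """Script with variables for key input fields instead of hard-coded values."""
    driver.type("item_field", item)
    driver.type("quantity_field", quantity)
    driver.click("order_button")
    return driver.read("total_field") == expected_total

def run_data_driven(driver, data_file="orders.csv"):
    """Drive the same script once per row of the data table."""
    with open(data_file, newline="") as handle:
        for row in csv.DictReader(handle):
            ok = place_order(driver, row["item"], row["quantity"], row["expected_total"])
            print(f"{row['item']}: {'PASS' if ok else 'FAIL'}")
```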

Keyword Driven Framework

Sometimes this is also called "table driven" test automation. It is typically an application-independent automation framework designed to process our tests. These tests are developed as data tables using a keyword vocabulary that is independent of the test automation tool used to execute them. This keyword vocabulary should also be suitable for manual testing.
The data table records contain the keywords that describe the actions we want to perform. They also provide any additional data needed as input to the application, and where appropriate, the benchmark information we use to verify the state of our components and the application in general.
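
A toy keyword engine in Python might look like the following. The keyword names, the sample table, and the driver methods are illustrative assumptions rather than part of any specific tool; the table could just as easily live in a spreadsheet maintained by testers.

```python
# Keyword-driven sketch: each record names an action plus its data, and a small
# engine maps keywords to tool-specific code. Keywords, table, and driver API
# are hypothetical.
KEYWORD_TABLE = [
    ("launch",      "http://example.com/login", ""),
    ("enter_text",  "user_field",               "demo_user"),
    ("enter_text",  "password_field",           "demo_pass"),
    ("click",       "login_button",             ""),
    ("verify_text", "welcome_banner",           "Welcome, demo_user"),
]

def run_keywords(driver, table):
    """Interpret each data-table record and log the outcome."""
    actions = {
        "launch":      lambda target, data: driver.open(target),
        "enter_text":  lambda target, data: driver.type(target, data),
        "click":       lambda target, data: driver.click(target),
        "verify_text": lambda target, data: driver.read(target) == data,
    }
    for keyword, target, data in table:
        result = actions[keyword](target, data)
        print(f"{keyword:12} {target:20} -> {'FAIL' if result is False else 'OK'}")
```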

Another advantage of the keyword driven approach is that testers can develop tests without a functioning application as long as preliminary requirements or designs can be determined. All the tester needs is a fairly reliable definition of what the interface and functional flow is expected to be like. From this they can write most, if not all, of the data table test records.
Building the keyword vocabulary and the engine that interprets it does require a heavy initial investment. Fortunately, this investment is mostly a one-shot deal: once in place, keyword-driven automation is arguably the easiest of the data-driven frameworks to maintain and perpetuate, providing the greatest potential for long-term success.

Hybrid Framework

The most commonly implemented framework is a combination of all of the above techniques, pulling from their strengths and trying to mitigate their weaknesses. This hybrid test automation framework is what most frameworks evolve into over time and across multiple projects.

Thursday, June 30, 2011

Test Automation Tools - Load/Performance Testing

Load and performance test tools:

Proprietary/Licensed Tools:

1. LoadRunner - HP (formerly Mercury)
2. Rational Performance Tester - IBM
3. Silk Performer - Micro Focus (formerly Borland)
4. Visual Studio Team System (VSTS) - Microsoft
5. Forecast Web - Facilita
6. e-Load - Empirix
7. NeoLoad - Neotys
8. QALoad - Micro Focus (formerly Compuware)

Open Source Tools:

1. OpenSTA
2. WebLOAD
3. JMeter

Test Automation Tools - Functional Testing

Various test automation tools are available in the market; some are open source and others are proprietary.

Licensed/Proprietary Tools:

1. QuickTest Professional (QTP) - HP (formerly Mercury)
2. Rational Functional Tester (RFT) - IBM (formerly Rational)
3. SilkTest - Micro Focus (formerly Borland)
4. WinRunner - HP (formerly Mercury)
5. Janova Basic / Professional / Enterprise - Janova
6. TestComplete - SmartBear
7. TestPartner - Micro Focus
8. SOAtest - Parasoft
9. AutoIt (freeware)

Open Source Tools:

1. Selenium
2. Watir
3. WatiN
4. Canoo WebTest
5. QAT
6. Sahi
7. Tomato
8. WET
9. SoapUI
10. Dogtail

Wednesday, June 29, 2011

Test Automation and Feasibility Study

When an application needs to be tested with multiple sets of data, when we need to navigate through the same functionality multiple times, or when the application has a regression phase, we can go for test automation.  The purpose of test automation is to save testing effort and to provide consistent quality.  Even so, we cannot implement test automation in every such case, because other parameters also drive the decision, including project budget, number of human resources, number of regression cycles to run, and contract duration.

Sometimes it is not possible to automate the complete functionality of an application.  The regression test cases/scenarios decide the need for test automation.  Test automation is difficult if the application functionality changes rapidly or if the application has objects that are not static and move within the page.

If the application qualifies for test automation, we need to do a feasibility study to choose the automation tool that best meets the testing requirements.  The following are the points to consider during this phase:

1. Which technologies does the test automation tool support (add-ins)?

2. Which scripting languages does the test automation tool support?

3. How quickly can we learn the test automation tool?

4. How quickly can we develop scripts using the test automation tool?

5. How fast does the test automation tool execute the code?

6. What are the cost and reputation of the test automation tool?

Apply this questionnaire to the different tools available and use the answers to decide which test automation tool to use in your testing project.
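
One hedged way to compare the answers is a simple weighted scoring matrix; the criteria weights, candidate tool names, and scores below are purely illustrative.

```python
# Weighted scoring sketch for tool selection: rate each candidate 1-5 per
# criterion and weight the criteria. All names, weights, and scores are made up.
WEIGHTS = {"technology_support": 3, "language": 2, "learning_curve": 2,
           "scripting_speed": 2, "execution_speed": 1, "cost_and_reputation": 3}

CANDIDATES = {
    "Tool A": {"technology_support": 5, "language": 4, "learning_curve": 3,
               "scripting_speed": 4, "execution_speed": 4, "cost_and_reputation": 2},
    "Tool B": {"technology_support": 3, "language": 5, "learning_curve": 4,
               "scripting_speed": 3, "execution_speed": 5, "cost_and_reputation": 5},
}

def weighted_score(scores):
    """Sum of (criterion weight * tool score) across all criteria."""
    return sum(WEIGHTS[criterion] * value for criterion, value in scores.items())

# Print the candidates from best to worst total score.
for tool, scores in sorted(CANDIDATES.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{tool}: {weighted_score(scores)}")
```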

Thursday, June 23, 2011

Entry and Exit Criteria in Testing

Entrance Criteria:
There is no standard for what should be included in the entrance criteria, but there are a number of items that should be considered for inclusion:

* All developed code must be unit tested. Unit and link testing must be completed and signed off by the development team.
* System Test plans must be signed off by Business Analyst and Test Controller
* All human resources must be assigned and in place
* All test hardware and environments must be in place, and free for System test use
* The Acceptance Criteria must be prepared

Exit Criteria:
* All Test Cases have been executed and passed
* All Defects have been fixed and Closed (or) Deferred
* System test results and metrics have been documented
* All critical/high severity defects are resolved and successfully retested
* 90% of business processes are working
* All critical Business processes are working
* The software runs on all the product’s supported hardware and software configurations
* The final draft of product publications, including the Quick Start Guide, has been reviewed, tested in QA, and approved by the core team

Some Testing Definitions

Test
An activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component

Test Bed
An environment containing the hardware, instrumentation, simulation, software tools and other support elements needed to conduct a test

Test Case
A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement

Test Coverage
The degree to which a given test or set of tests addresses all specified requirements for a given system or component

Test Driver/Test Harness
A software module used to invoke a module under test and, often, to provide test inputs, control and monitor execution, and report test results

Test Case Generator
A software tool that accepts as input source code, test criteria, specification, or data structure definitions; uses these inputs to generate test input data and sometimes determines expected results

Test Case Specification/ Test Description
A document that specifies the test inputs, execution conditions and predicted results for an item to be tested

Test Report
A document that describes the conduct and results of the testing carried out for a system or component

Acceptance Criteria
The criteria that a system/component must satisfy in order to be accepted by a user, customer, or other authorized entity

Wednesday, June 22, 2011

Roles and Responsibilities in Testing

The number of roles that exist simultaneously depends on the size and scope of the testing activity.  In a small organization in which there are a few testing professionals, one individual may assume the roles of both Test Analyst and Tester on a testing project. Similarly, one person may assume the responsibilities of the Testing Manager and Test Team Leader if there is only one testing task or project to manage. A single person may also assume different roles on different projects.  For example, a Test Analyst on one project may perform the role of Test Team Leader on another.

Test Manager:
This role should be created right from the beginning of the software development project and certainly not later than when the requirements gathering phase is over.  The Test Manager has the responsibility to administer the organizational aspects of the testing process on a day-to-day basis and is responsible for ensuring that the individual testing projects produce the required products to the required standards of quality and within the specified constraints of time, resources and budget.  He is also responsible for liaison with the development teams to ensure that they follow the Unit Testing and Independent Testing approach documented within the process.  The Test Manager reports to senior management or a director within the organization, such as the Quality Assurance Manager or the Information Technology Director.  In large organizations, and particularly those following a formal project management process, the Test Manager may also report to a Testing Program Board, which is responsible for the overall direction of the project management of the testing program.

Test Team Leader:
In small testing projects the Test Manager himself may play this role.  In large projects this role needs to be created separately.  This role should be in place after the test plan is ready and the test environment is established.  The Test Team Leader is given the responsibility to carry out the day-to-day testing activities.  His or her responsibilities include assigning tasks to one or more testers/test analysts, monitoring their progress against agreed-upon plans, setting up and maintaining the testing project, and ensuring the generation of the test artifacts.  One or more Test Analysts and Testers report to the Test Team Leader, who in turn reports to the Test Manager.  During Acceptance Testing the Test Team Leader is responsible for coordinating with the User Representative and Operations Representative to obtain one or more users to perform Acceptance Testing, which is the responsibility of the customer.

Test Analyst:
The Test Analyst is responsible for the design and implementation of one or more Test Scripts/Test Cases, which will be used to accomplish the testing of the AUT.  If the scope of the testing activity is small, the Test Lead might end up doing many of the tasks described here. The Test Analyst may also be called upon to assist the Test Team Leader in the generation of the Test Specification Document.  During test case design, the Test Analyst will need to analyze the Requirements Specification for the AUT to identify the specific requirements that must be tested.  He will need to prioritize the Test Cases to reflect the importance of the feature being validated and the risk of the feature failing during normal use of the AUT.  On completion of the testing project, the Test Analyst is responsible for the back-up and archival of all testing documentation and materials.  He is also responsible for completing a Test Summary Report briefly describing the key points of the testing project.

Tester:
The Tester is primarily responsible for the execution of the Test Scripts/Test Cases created by the Test Analyst and for the interpretation and documentation of the results of the Test Case execution.  During test execution, the Tester is responsible for filling in the Test Results Record Forms to document the observed result of executing each test case and for co-signing the bottom of each form with the Independent Test Observer to confirm that the Test Case was followed correctly and the observed result recorded accurately. The Tester is also responsible for the recovery of the test environment in the event of failure of the system.

Independent Test Observer
The Independent Test Observer is responsible for providing independent verification that correct procedures are followed during the testing of the AUT.  The Independent Test Observer is responsible for ensuring that the Tester executes the tests according to the instructions provided in the test cases.  In organizations in which there is a formal Quality Assurance group, the Independent Test Observer may be a Quality Assurance Representative drawn from that group.  In small organizations where there is no formal Quality Assurance group, the Independent Test Observer may be a staff member drawn from another group or project within the organization.  The key criterion in selecting an Independent Test Observer is that he or she must be impartial and objective.

Test Case

IEEE Standard 610 (1990) defines test case as follows:
“(1) A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

“(2) (IEEE Std 829-1983) Documentation specifying inputs, predicted results, and a set of execution conditions for a test item.”

Tuesday, June 21, 2011

Types of Software Testing

Static Testing
Static testing refers to testing something that is not running: examining and reviewing it.  A specification is a document rather than an executing program, so reviewing it is considered static testing.  Static testing also covers anything created as written or graphical documents, or a combination of both, including the test plan, test strategy, test scenarios, test cases, etc.

Dynamic Testing
Dynamic testing exercises the running software.  The techniques used are determined by the type of testing that must be conducted:

· Structural (usually called "white box") testing
· Functional ("black box") testing

Structural testing or White box testing
Structural tests verify the structure of the software itself and require complete access to the source code. This is known as ‘white box’ testing because you see into the internal workings of the code.

Functional or Black Box Testing
Functional tests examine the behavior of software as evidenced by its outputs, without reference to internal functions; hence it is also called ‘black box’ testing. If the program consistently provides the desired features with acceptable performance, then specific source code features are irrelevant. It is a pragmatic and down-to-earth assessment of software.
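
To make the distinction concrete, here is a small, made-up example: the same trivial function tested first from the black-box viewpoint (inputs and expected outputs only) and then from the white-box viewpoint (deliberately covering both branches and the boundary). The function and the tests are invented for illustration.

```python
# Illustration of black-box vs. white-box viewpoints on a trivial function.
def discount(total):
    """Apply a 10% discount above 100, otherwise no discount."""
    if total > 100:
        return round(total * 0.9, 2)
    return total

# Black-box tests: only inputs and expected outputs, no knowledge of the code.
assert discount(50) == 50
assert discount(200) == 180.0

# White-box tests: written with the source in view, so they deliberately
# exercise both branches and the boundary of the `total > 100` condition.
assert discount(100) == 100        # boundary: condition false
assert discount(100.01) == 90.01   # boundary: condition true
```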

Classes of Reviews

Informal or Peer Review
This type of review is generally a one-to-one meeting between the author of a work product and a peer, initiated as a request for input regarding a particular artifact or problem. There is no agenda, and results are not formally reported. These reviews occur on an as-needed basis throughout each phase of a project.

Semiformal or Walkthrough Review
The author of the material being reviewed facilitates this. The participants are led through the material in one of two formats: either the presentation is made without interruptions and comments are made at the end, or comments are made throughout. Possible solutions for uncovered defects are not discussed during the review.

Formal or Inspection Review
An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes.
The subject of the inspection is typically a document such as a requirements specification or a test plan, and the purpose is to find problems and see what is missing, not to fix anything.  Attendees should prepare for this type of meeting by reading through the document; most problems will be found during this preparation. The result of the inspection meeting should be a written report.

Software testing

Software testing is a critical element of software quality assurance and represents the ultimate process to ensure the correctness of the product. A quality product enhances customer confidence in using the product and thereby improves the business outcome.

In other words, a good quality product means zero defects, which is derived from a better quality process in testing.

To develop a software product or project, user needs and constraints must be determined and explicitly stated. The development process is broadly classified into two:

1. Product development
2. Project development

Product development is done assuming a wide range of customers and their needs. This type of development involves customers from all domains and collecting requirements from many different environments.
Project development is done by focusing on a particular customer's needs, gathering data from the customer's environment, and bringing out a valid set of information that serves as a pillar of the development process.

Testing the product means adding value to it by raising the quality or reliability of the product. Raising the reliability of the product means finding and removing errors. Hence one should not test a product to show that it works; rather, one should start with the assumption that the program contains errors and then test the program to find as many of the errors as possible

Testing is the process of executing a program with the intent of finding errors

Why software Testing?

Software testing helps to deliver quality software products that satisfy user’s requirements, needs and expectations. If done poorly,

1. Defects are found during operation
2. Maintenance costs are high and users are dissatisfied
3. Mission failure may occur
4. Operational performance and reliability are impacted

Defect Life Cycle

Monday, June 20, 2011

Testing Stages and Types


Unit Testing
The primary goal of unit testing is to take the smallest piece of testable software in the application, isolate it from the remainder of the code, and determine whether it behaves exactly as we expect. Each unit is tested separately before being integrated into modules to test the interfaces between modules. Unit testing has proven its value in that a large percentage of defects are identified during its use.
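
A small Python sketch of this isolation, using the standard unittest and unittest.mock modules; the net_price function and the stubbed rate lookup are invented for illustration.

```python
# Unit-testing sketch: the smallest testable piece is exercised on its own,
# with its external dependency (a rate lookup) replaced by a stub.
import unittest
from unittest import mock

def net_price(amount, rate_lookup):
    """Unit under test: combines an amount with a looked-up tax rate."""
    return round(amount * (1 + rate_lookup()), 2)

class NetPriceTest(unittest.TestCase):
    def test_adds_tax(self):
        stub_rate = mock.Mock(return_value=0.20)   # isolate from the real rate service
        self.assertEqual(net_price(100, stub_rate), 120.0)

if __name__ == "__main__":
    unittest.main()
```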

Integration Testing 
Integration testing is a logical extension of unit testing. In its simplest form, two units that have already been tested are combined into a component and the interface between them is tested. Eventually all the modules making up a process are tested together. Integration testing identifies problems that occur when units are combined.

We can do integration testing in a variety of ways but the following are three common strategies:

The top-down approach to integration testing requires that the highest-level modules be tested and integrated first. This allows high-level logic and data flow to be tested early in the process, and it tends to minimize the need for drivers. A disadvantage of the top-down approach is its poor support for early release of limited functionality.

The bottom-up approach requires the lowest-level units be tested and integrated first. These units are frequently referred to as utility modules. By using this approach, utility modules are tested early in the development process and the need for stubs is minimized. Like the top-down approach, the bottom-up approach also provides poor support for early release of limited functionality. 

The umbrella approach requires testing along functional data and control-flow paths. First, the inputs for functions are integrated in the bottom-up pattern. The outputs for each function are then integrated in the top-down manner. The primary advantage of this approach is the degree of support for early release of limited functionality. It also helps minimize the need for stubs and drivers.
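
The role of stubs and drivers can be sketched in a few lines of Python; the checkout function, the payment stub, and the values are invented to illustrate the top-down case.

```python
# Top-down integration sketch: the high-level module is integrated first and the
# missing low-level unit is replaced by a stub until it is ready.
def checkout(cart, payment_gateway):
    """High-level module under integration test."""
    total = sum(price for _, price in cart)
    return payment_gateway(total)

def payment_stub(total):
    """Stand-in for the not-yet-integrated low-level payment module."""
    return {"status": "approved", "charged": total}

# Driver: exercises the partially integrated system through the stub.
result = checkout([("book", 12.5), ("pen", 1.5)], payment_stub)
assert result == {"status": "approved", "charged": 14.0}
```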

System Testing Types
The system test phase begins once modules are integrated enough to perform tests in a whole-system environment. System testing can occur in parallel with integration testing, especially with the top-down method.

1. Regression Testing
Rerunning existing tests against the modified code to determine whether the changes break anything that worked prior to the change. 

To make regression testing cost effective and yet ensure good coverage, one or more of the following techniques may be applied:

(a) Test Automation: If the test cases are automated, they may be executed as scripts after each change is introduced into the system. Executing test cases in this way helps eliminate oversights and human errors, and it may also make execution faster and cheaper. However, there is a cost involved in building the scripts.
(b) Selective Testing: Some teams choose to execute the test cases selectively. They do not execute all the test cases during regression testing; they test only what they decide is relevant. This helps reduce the testing time and effort.
2. Installation Testing
Installation testing is a kind of quality assurance work in the software industry that focuses on what customers will need to do to install and set up the new software successfully. The testing process may involve full, partial, or upgrade install/uninstall processes. This testing is typically done by the software testing engineer in conjunction with the configuration manager.

3. Failover and Recovery Testing
Failover and recovery testing verifies the product's ability to confront and successfully recover from possible failures arising from software bugs, hardware failures, or communication problems (e.g., network failure). The objective of this test is to check that the system can be restored (or that the main functional systems are duplicated) so that, in the event of failure, the safety and integrity of the product's data are ensured.
4. Performance Testing
Performance testing is intended to determine the responsiveness, throughput, reliability, and/or scalability of a system under a given workload. Performance testing is typically done to help identify bottlenecks in a system, establish a baseline for future testing, support a performance tuning effort, determine compliance with performance goals and requirements, and/or collect other performance-related data to help stakeholders make informed decisions about the overall quality of the application being tested.
5. Stress Testing
Stress testing is the process of determining the ability of a computer, network, program or device to maintain a certain level of effectiveness under unfavorable conditions. When conducting a stress test, an adverse environment is deliberately created and maintained.

Actions involved may include:
(i) Running several resource-intensive applications in a single computer at the same time
(ii) Attempting to hack into a computer and use it as a zombie to spread spam
(iii) Flooding a server with useless e-mail messages
(iv) Making numerous, concurrent attempts to access a single Web site
6. Load/Volume Testing
Load tests measure the capability of an application to function correctly under load by measuring transaction pass/fail/error rates. Load testing must be executed on “today’s” production-size database, and optionally with a “projected” database. These tests may need to be executed several times in the first year of wide-scale deployment to ensure that new releases and changes in database size do not push response times beyond the prescribed SLAs. A minimal sketch of this idea appears after this list.

7. Security Testing
Security testing is a process to determine that an information system protects data and maintains functionality as intended.
The six basic concepts that need to be covered by security testing are (ANACIA): Authentication, Non-repudiation, Availability, Confidentiality, Integrity, and Authorization.

Security testing as a term has a number of different meanings and can be completed in a number of different ways. As such, a security taxonomy helps us to understand these different approaches and meanings by providing a base level to work from.
8. Acceptance Testing
Acceptance testing is the final stage of testing performed on a system prior to the system being delivered to a live environment. Acceptance tests are generally performed as "black box" tests. User acceptance testing (UAT) is the term used when the acceptance tests are performed by the person or persons who will be using the live system once it is delivered. The UAT acts as a final confirmation that the system is ready for go-live. A successful acceptance test at this stage may be a contractual requirement prior to the system being signed off by the client.