Artifacts & Defects


Test Artifacts comprise:

  • Test Plan
  • Test Case
  • Test Script
  • Test Data
  • Test Suite
  • Test Harness
  • Test Report


Test Defects

  • Defects
  • Defect Severity
  • Defect Probability
  • Defect Priority
  • Defect Life Cycle
  • Defect Report

A software Test Plan is a document describing the testing scope and activities. It is the basis for formally testing any software/product in a project.

A test plan is a document describing the scope, approach, resources and schedule of intended test activities. It identifies, amongst others, the test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques, the entry and exit criteria to be used and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.

  • Master test plan: A test plan that typically addresses multiple test levels.
  • Phase test plan: A test plan that typically addresses one test phase.

 

Reference:  http://www.softwaretestingfundamentals.com/defect/

 

Defect management video

Test Process

The five steps in the Test Process are:
  • Test Planning
  • Test Analysis and Specification
  • Test Execution
  • Test Recording (Verification)
  • Checking for Completion.

Test Planning: The Test Plan describes how the Test Strategy is implemented.

It involves producing a document that describes the overall approach and includes the test objectives.

Test Plan

Contents of a Test Plan include:

  1. Background
  2. Reference documents
  3. Approach
  4. Method
  5. Timetable
  6. Resources
  7. Dependencies
  8. Report
  9. Test Asset Identification
  10. Exit Criteria

  • The most critical stage of the process
  • Effort spent now will be rewarded later
  • The foundation on which testing is built

Test Specification (Sometimes referred to as test design)

  • Preparation & analysis
  • Building or designing test conditions and test cases using recognized test techniques (a sketch using one such technique follows below)
  • Defining expected results
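A minimal sketch of one recognized test technique, boundary value analysis, used to derive test conditions; the "age" field and its 18-65 valid range are hypothetical examples, not taken from the source:

    # Boundary value analysis: test just below, on, and just above each boundary.
    def boundary_values(minimum, maximum):
        """Classic boundary test values for an inclusive numeric range."""
        return [minimum - 1, minimum, minimum + 1, maximum - 1, maximum, maximum + 1]

    # Test conditions derived from the technique, with expected results defined up front.
    conditions = [(value, 18 <= value <= 65) for value in boundary_values(18, 65)]

    for value, should_accept in conditions:
        print(f"age={value:3d} -> expected: {'accept' if should_accept else 'reject'}")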

Test preparation: Analyse the application, identify good test conditions, identify test cases, document thoroughly and cross-reference among the team.

Building Test Cases: Test cases comprise standard data, transaction data, actions and expected results, as sketched below.
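A minimal sketch of how one documented test case could capture these four elements; the field names and the login scenario are illustrative assumptions, not a prescribed format:

    from dataclasses import dataclass

    @dataclass
    class DocumentedTestCase:
        case_id: str
        standard_data: dict       # reference data the system needs before the test
        transaction_data: dict    # data entered or processed during the test
        actions: list             # steps the tester (or script) performs
        expected_results: list    # what should be observed after each action

    tc_login_001 = DocumentedTestCase(
        case_id="TC-LOGIN-001",
        standard_data={"registered_user": "alice@example.com"},
        transaction_data={"password": "correct-password"},
        actions=["open login page", "enter credentials", "submit"],
        expected_results=["user is redirected to the dashboard",
                          "a successful-login audit record is written"],
    )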

Test Cases Vs Expected Results

Expected Results (see the sketch after this list):
  • The outcome of each action
  • The state of the application during the test and after the test
  • The state of the data during the test phase and after the test phase
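A minimal sketch showing that expected results cover more than the returned outcome: the state of the application and its data after the test is asserted as well. The Account class and transfer() function are hypothetical application code, and the test is written in pytest style:

    class Account:
        def __init__(self, balance):
            self.balance = balance

    def transfer(source, target, amount):
        # Hypothetical application code under test.
        source.balance -= amount
        target.balance += amount
        return "OK"

    def test_transfer_updates_both_accounts():
        src, dst = Account(100), Account(50)
        outcome = transfer(src, dst, 30)
        assert outcome == "OK"       # expected outcome of the action
        assert src.balance == 70     # expected state of the data after the test
        assert dst.balance == 80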

Test Execution

  • Test execution schedule / log
  • Identify which tests are to be run
  • Test environment primed & ready
  • Resources ready, willing & able
  • Back-up & recovery procedures in place
  • Batch runs planned and scheduled

If all of these are in place, then we are ready to carry out our tests.
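A minimal sketch of such a test run, assuming a hypothetical schedule of test names and a registry of test functions; the readiness check is only a placeholder for the environment, resource and back-up checks listed above:

    import datetime

    def environment_ready():
        return True   # placeholder for environment, resource and back-up checks

    def run_scheduled_tests(schedule, registry):
        """Run the identified tests in order and return an execution log."""
        if not environment_ready():
            raise RuntimeError("Test environment is not primed and ready")
        log = []
        for test_name in schedule:            # identify which tests are to be run
            passed = registry[test_name]()    # each test returns True (pass) or False (fail)
            log.append({"test": test_name,
                        "result": "PASS" if passed else "FAIL",
                        "run_at": datetime.datetime.now().isoformat()})
        return log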

Test Script

Test Recording involves keeping good records of the test activities that you have carried out.

For example, the version of the software you have tested and the test specifications used are recorded, along with the actual outcome of each test.

     Test verification:

  • If our planning and preparation is sufficiently detailed, this is the easy part of software testing
  • The test is run to verify the application under test
  • The test itself either passes or fails!

       The test log should record (see the sketch after this list):

  • Software and test version control
  • Specifications used as test base
  • Test timings
  • Test result (actual results and expected results)
  • Defect details for erroneous tests
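A minimal sketch of a single test log entry holding the items above; the field names and sample values are illustrative only, not a prescribed format:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TestLogEntry:
        software_version: str            # version of the software under test
        test_version: str                # version of the test case / script
        specification: str               # specification used as the test base
        started_at: str                  # test timings
        finished_at: str
        expected_result: str
        actual_result: str
        defect_id: Optional[str] = None  # recorded only for erroneous tests

    entry = TestLogEntry("app 2.3.1", "TC-LOGIN-001 v4", "SRS section 5.2",
                         "2024-05-01T10:00:00", "2024-05-01T10:02:10",
                         "user reaches dashboard", "HTTP 500 error",
                         defect_id="DEF-118")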

Test Completion

        Test Exit Criteria

        Used to determine when testing is complete and the software can be implemented

  1. Budget used
  2. Defect detection rate
  3. Performance satisfactory
  4. Test Coverage
  5. Key Functionality tested
Completion or exit criteria are used to determine when testing (at any stage) is complete. These criteria may be defined in terms of cost, time, faults found or coverage criteria.
Coverage criteria are defined in terms of items that are exercised by test suites, such as branches, user requirements, the most frequently used transactions, etc.
Tools such as code coverage monitors are used to ascertain how many lines of code have been executed. A sketch of an automated exit-criteria check follows below.
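A minimal sketch of such an automated completion check; the metric names and threshold values are illustrative assumptions (the coverage figure, for instance, could come from a code coverage monitor):

    def testing_complete(metrics):
        """Evaluate illustrative exit criteria and report which ones are met."""
        criteria = {
            "coverage_reached": metrics["statement_coverage"] >= 0.80,
            "key_functions_tested": metrics["key_functions_tested"],
            "no_open_severe_defects": metrics["open_severe_defects"] == 0,
            "within_budget": metrics["budget_spent"] <= metrics["budget_total"],
        }
        return all(criteria.values()), criteria

    done, detail = testing_complete({
        "statement_coverage": 0.85,     # e.g. reported by a code coverage monitor
        "key_functions_tested": True,
        "open_severe_defects": 0,
        "budget_spent": 90,
        "budget_total": 100,
    })
    print("Exit criteria met:", done)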