Lecture Notes for CS 325

The Testing Process, 29 March 2000


  1. the testing process - first crack at the whole system, last crack at finding errors

  2. regression testing - the need to keep on testing

  3. costly and difficult - budget 25% to 40% of total resources to testing

  4. how to test

    1. different test types are good at catching different things

    2. different test types are good on different types of code

    3. match tests to expected errors and system locations

    4. combine various test types at various intensities to achieve overall coverage objectives

    5. economic criteria are often definitive - $/fault found
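The $/fault criterion above can be sketched in a few lines of Python. The costs and fault counts here are made-up illustrations, not real data.

```python
# Compare test types by dollars per fault found -- the economic
# criterion for choosing among test types.  All numbers are
# hypothetical examples.
def cost_per_fault(cost_dollars, faults_found):
    """Return the cost of each fault found, or None if no faults."""
    if faults_found == 0:
        return None
    return cost_dollars / faults_found

# Hypothetical results from three test types on the same system.
results = {
    "code inspection": cost_per_fault(4000, 20),   # 200.0 per fault
    "unit testing":    cost_per_fault(6000, 15),   # 400.0 per fault
    "system testing":  cost_per_fault(9000, 10),   # 900.0 per fault
}

# The most economical test type is the one with the lowest $/fault.
best = min(results, key=results.get)
```

Under these made-up numbers, code inspection wins; on a different system or error profile, the ranking can reverse, which is why the criterion has to be re-evaluated per project.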

  5. when to test

    1. unit, integration, system, and acceptance testing

    2. unit testing, done during coding by coders in isolation

    3. integration testing, done to test the system design

    4. system testing, done to test the system requirements

    5. acceptance testing, done by the client to decide whether to accept the delivered system

    6. regression testing - you never really stop testing

      1. the initial testing establishes a baseline against which system changes can be retested

      2. an enormous amount of documentation is needed for regression testing - test plan, test cases, test scripts, test results, test analysis

      3. managing all this documentation is another big problem - configuration management
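A unit test that also serves as a regression baseline can be sketched with Python's standard unittest module. The function under test (word_count) is a hypothetical stand-in for a real module.

```python
import unittest

# Hypothetical unit under test: something a coder tests in
# isolation during coding.
def word_count(text):
    """Count whitespace-separated words in a string."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    # Each test doubles as a regression baseline: rerun the whole
    # suite after every change to catch new faults in old code.
    def test_simple(self):
        self.assertEqual(word_count("the testing process"), 3)

    def test_empty(self):
        self.assertEqual(word_count(""), 0)

# Run the suite and keep the result object for the test log.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the suite is a program, not a checklist, rerunning it after every change costs almost nothing, which is what makes regression testing practical.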

  6. the test plan - defines the whole scope of testing for the project

    1. developed in parallel with the rest of the project

    2. describes testing units, tested features, testing approaches, test deliverables, schedule, personnel

    3. test units comprise modules to be tested and the test data

    4. defining the features to be tested

    5. determining the testing criteria guiding the tests

    6. what will assure the customer of sufficient testing

    7. schedule testing and its effects on other parts of development

    8. how many people of what type doing what

  7. test case specifications - how does each test get carried out

    1. a refinement of the test plan

    2. determine test case criteria and generate appropriate test cases

    3. measure the test cases against the criteria

    4. review the test cases

    5. writing documentation - input, process, output, analysis

    6. test case specifications are the script for actual testing
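One way to make a test case specification a literal script is to record each case as a data record naming its input, process, and expected output, then execute the records. The field names and the unit under test (reverse_string) are illustrative, not a standard.

```python
# A test case specification as data: each case names its input,
# the process (function) under test, and the expected output.
# Field names here are illustrative assumptions, not a standard.
def reverse_string(s):          # hypothetical unit under test
    return s[::-1]

test_cases = [
    {"id": "TC-1", "input": "abc", "process": reverse_string, "expected": "cba"},
    {"id": "TC-2", "input": "",    "process": reverse_string, "expected": ""},
]

def run_cases(cases):
    """Execute each spec and record pass/fail for later analysis."""
    log = {}
    for case in cases:
        actual = case["process"](case["input"])
        log[case["id"]] = (actual == case["expected"])
    return log

log = run_cases(test_cases)
```

Keeping the spec in data form means the same records drive the test run, the test log, and the post-run analysis.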

  8. testing and analysis

    1. analyze the test case specifications

    2. construct the testing harness - stubs, drivers, data collectors

    3. documentation - test log, test summary report, error report

    4. post-mortem analysis - determining the test case spec effectiveness

      1. faults per test and faults overall

      2. testing effort as a portion of overall effort

      3. computational resources

      4. punishment for the guilty - tracking faults back to errors
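The harness pieces named above can be sketched together: a stub stands in for a module the unit depends on, a driver feeds the unit its test inputs, and a data collector gathers the results for the test log. All names here are hypothetical.

```python
# Minimal testing-harness sketch: stub, driver, data collector.
# Every name below is a made-up example, not a real API.

def database_lookup_stub(key):
    """Stub: stands in for a real database module not yet built."""
    return {"alice": 42, "bob": 7}.get(key, 0)

def unit_under_test(key, lookup):
    """Unit under test: doubles whatever the lookup returns."""
    return 2 * lookup(key)

def driver(keys, collector):
    """Driver: feeds test inputs to the unit, collects outputs."""
    for key in keys:
        collector.append((key, unit_under_test(key, database_lookup_stub)))

test_log = []            # data collector for the test log
driver(["alice", "bob", "carol"], test_log)
```

Passing the lookup in as a parameter is what lets the stub replace the real module without touching the unit's code, so the same unit runs unchanged during integration testing.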


This page last modified on 27 March 2000.