Some comments on the test drafts.


R. Clayton (rclayton@monmouth.edu)
Thu, 30 Mar 2000 20:45:46 -0500 (EST)


  These comments are in no particular order. I'll be around all day tomorrow
(Friday, 31 March) if you want to pick up your drafts or talk further.

  Your test cases should specify dynamic tests. Most of the testing done up to
the testing phase has been static (walkthroughs and reviews, mostly); the
testing phase is where the resources are budgeted for dynamic testing. This is
not to say static testing is unimportant in the testing phase (you could use
it, for example, to help generate test cases for structural testing), but
almost all effort in the testing phase should be devoted to dynamic testing.

  Test descriptions should be oriented around the expected results, not the
inputs to be used in testing or the error cases that might occur. The expected
results (or their lack) are the ends of the test; the inputs are the means by
which the ends are achieved. By orienting test descriptions around inputs to
be used, you don't adequately describe the test's ends, and you limit the means
by which you can achieve the ends. For example, the test description

    Determines if eval() accepts two operators as input.

doesn't make it clear what this test does (what is "accept"?) and why it's
important; it also leaves open the question of three-operator input. A better
alternative would be

    Determine if eval() correctly calculates expressions containing more than
    one operator.
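
  A result-oriented description like this translates directly into dynamic
test cases. As a sketch (Python's builtin eval stands in for the eval()
under test, and the particular expressions are my assumptions):

    # Dynamic test cases for "eval() correctly calculates expressions
    # containing more than one operator." Each case pairs an input
    # expression with the expected result.

    cases = [
        ("2+3*4",   14),  # operator precedence
        ("10-4-3",   3),  # left associativity
        ("(2+3)*4", 20),  # parentheses override precedence
    ]

    for expr, expected in cases:
        actual = eval(expr)  # stand-in for the module's eval()
        print("%-10s -> %-4s (expected %s): %s"
              % (expr, actual, expected,
                 "pass" if actual == expected else "FAIL"))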

  Because of the huge number of ways failure can occur, it's easier to specify
tests in terms of the more limited ways success can occur. For example, what
happens when the test is to

    Determine if eval() coredumps when a very large string is input.

but eval(), rather than coredumping, runs forever? Did eval() pass the test?
After all, it didn't coredump. A better alternative is

    Determine if eval() can correctly calculate an expression contained in a
    string of 1024 characters or more.
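
  Notice that, under the better description, "runs forever" is a plain test
failure. One way to enforce that dynamically is to put a time budget on the
test; a minimal sketch (signal.alarm is Unix-only, and the expression, the
5-second budget, and builtin eval standing in for the unit under test are
all my assumptions):

    import signal

    def on_timeout(signum, frame):
        raise TimeoutError("eval() exceeded its time budget")

    expr = "1+" * 512 + "1"   # a 1025-character expression; its value is 513
    expected = 513

    signal.signal(signal.SIGALRM, on_timeout)
    signal.alarm(5)           # "runs forever" now fails after 5 seconds
    try:
        actual = eval(expr)   # stand-in for the eval() under test
        signal.alarm(0)
        print("pass" if actual == expected else "FAIL: got %r" % actual)
    except TimeoutError as err:
        print("FAIL: %s" % err)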

  Write useful test descriptions. A test that

    Determines if eval() returns the correct result.

isn't very helpful to either the test designer or the test evaluator.

  Write useful test result descriptions. Suppose you were responsible for
maintaining module eval, and you got the following test report (on a Friday
afternoon, ten minutes before you were going to leave for home):

    Tester inspected module eval and found it to be non-compliant with
    requirements.

What are you to do next?
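
A more useful report names the test, the input, and the expected and actual
behavior, so the maintainer can reproduce the failure. Something like (the
details here are invented for illustration):

    Test 3: eval("2+3*4") returned 20; expected 14. The failure is
    repeatable; eval() appears to apply operators left to right,
    ignoring precedence.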

  Make sure the terms you use are defined, including the usual "proper output"
or "correct results," as well as the more esoteric "standardize compiler
procedure."

  Some terminology:

    "test items" are the modules to be tested - the system, the eval module,
    the eval() function - and not the data you're going to use to do the
    testing.

    "test deliverables" are the artifacts produced as a result of the testing
    process - the test cases and the test reports, for example - and not the
    artifacts needed to conduct the test - that is, the system and its
    documentation.

  Make sure your test document is clear. One test document listed, under
"output specifications," an input of "15*5" and an output of "25." It wasn't
clear if this was something an oracle is supposed to use to determine
correctness, or if it was the result of running a test.

  Here's an example test plan and test case specification:

    Test Plan

      Test units
   
        The unit under test is the sum() procedure.

      Features to be tested

        The arithmetic operators: does sum() recognize all legal arithmetic
        operators and perform the correct operation on each?

      Approach for testing

        Structural testing using the all-c-uses criterion.

      Test Deliverables

        Test case specification

        Test case results report

    Test Case Specification

      The table headers

        Test Data - the three parameters to the sum() procedure.
        Condition - the condition under test
        Expected Output - what it says
        Coverage - the c-use statements executed by the test data
        
      Test     Test         Condition                   Expected   Coverage
      Number   Data                                     Output

        1      3, 3, "+"    addition recognized             6      1, 3
        2      3, 3, "-"    subtraction recognized          0      1, 7
        3      3, 3, "*"    multiplication recognized       9      1, 11
        4      3, 3, "/"    division recognized             1      1, 15

      Test case coverage is 100% - all uses of all variables in any
      computation are executed by the test data.
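
  To make the example concrete, here is a sketch of a driver that runs the
four cases in the table. The signature sum(a, b, op) and the stand-in body
are assumptions read off the table, not the actual unit under test:

    def sum(a, b, op):            # stand-in for the procedure under test
        if op == "+": return a + b
        if op == "-": return a - b
        if op == "*": return a * b
        if op == "/": return a / b
        raise ValueError("unrecognized operator: %r" % op)

    cases = [                     # (test number, test data, expected output)
        (1, (3, 3, "+"), 6),
        (2, (3, 3, "-"), 0),
        (3, (3, 3, "*"), 9),
        (4, (3, 3, "/"), 1),
    ]

    for number, (a, b, op), expected in cases:
        actual = sum(a, b, op)
        print("test %d: sum(%s, %s, %r) = %s (expected %s): %s"
              % (number, a, b, op, actual, expected,
                 "pass" if actual == expected else "FAIL"))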


