Lecture Notes for CS 325

Implementation Validation and Testing, 1 March 2000


  1. verification

    1. objectives - the code works correctly (verification), the code does the correct thing (validation)

    2. static and dynamic code verification

    3. static techniques don't require execution

      1. pluses - simpler, can be automated, handles incomplete code

      2. minuses - results are weaker, since run-time behavior is never observed

    4. static and dynamic techniques find different errors

    5. code reading

      1. re-abstract the code and compare to the design spec

      2. a bottom-up procedure

      3. desk checking to find errors

    6. code reviews

      1. more formal, multi-person

      2. a follow-up technique after other verification and before testing

      3. designers, implementors, testers

      4. searching for inconsistencies between design and code

      5. errors in logic, control, data, and operations

      6. also non-functional issues such as performance

    7. static analysis

      1. tool-based approach to static verification

      2. cheap because automated, and effective for the classes of errors it detects

      3. much static analysis is compiler-based, chiefly data-flow analysis

      4. data-flow anomalies - assigned to but never used, used but never assigned to

      5. other oddities - unused variables, dead code (see the sketch after the tool list below)

      6. simple, fast analyses are of limited help

      7. more extensive, more useful analyses can be expensive and imprecise - e.g., alias analysis

      8. re-write code to enable static analysis - smaller, simpler code analyzes better

      9. other static analysis tools

        1. inter-module consistency checkers - like C or C++ function prototypes

        2. cross-reference generators or automatic diagrammers

        3. feature use frequencies

        4. enforce or evaluate coding styles
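
      A minimal sketch of the data-flow check in items 4 and 5, written
      as one of the tools above using Python's standard ast module; the
      function name and the whole-file scope are assumptions made for
      illustration.

        import ast

        def unused_assignments(source):
            # Flag names that are assigned but never read -- one of the
            # classic data-flow anomalies a static analyzer reports.
            assigned, read = set(), set()
            for node in ast.walk(ast.parse(source)):
                if isinstance(node, ast.Name):
                    if isinstance(node.ctx, ast.Store):
                        assigned.add(node.id)
                    elif isinstance(node.ctx, ast.Load):
                        read.add(node.id)
            return assigned - read

        print(unused_assignments("x = 1\ny = 2\nprint(x)"))   # {'y'}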

      10. symbolic execution

        1. simulates real execution with symbolic values in place of real data

        2. expensive even for small, simple code

        3. can establish strong and important characteristics of the code - e.g., resolving polymorphism in an OO language
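
      A toy illustration of symbolic execution, assuming a hand-built
      representation of a single branch; real tools work on the
      program's intermediate form and use constraint solvers.

        # Symbolically "execute":  if x < 0: y = -x  else: y = x
        # Each path pairs a condition on the input with a symbolic
        # result, instead of one concrete run.
        def explore(x="x"):
            return [(f"{x} < 0", f"-({x})"),
                    (f"not ({x} < 0)", x)]

        for condition, y in explore():
            print(f"path {condition!r}: y = {y}")
        # Both paths give a non-negative y -- a fact established for
        # every possible input, which is what makes the technique
        # strong, and with many branches expensive.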

      11. proving correctness

        1. more of an avoidance technique than a detection technique

        2. useful as an input to the code-design subprocess

        3. Eiffel uses a variant of program correctness as Design by Contract
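
      Eiffel's require/ensure clauses have no direct analogue in most
      languages; the decorator below is a hand-rolled Python
      approximation, written only to illustrate the idea.

        def contract(pre, post):
            # Minimal design-by-contract wrapper: check the
            # precondition on entry, the postcondition on the result.
            def wrap(f):
                def checked(*args):
                    assert pre(*args), "precondition violated"
                    result = f(*args)
                    assert post(result, *args), "postcondition violated"
                    return result
                return checked
            return wrap

        @contract(pre=lambda x: x >= 0,
                  post=lambda r, x: r * r <= x < (r + 1) ** 2)
        def int_sqrt(x):
            r = 0
            while (r + 1) ** 2 <= x:
                r += 1
            return r

        print(int_sqrt(10))   # 3, with the contract checked at run time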

    8. dynamic techniques - unit testing (example below)
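
      For contrast with the static techniques above, a minimal dynamic
      check using Python's standard unittest module; classify is a
      stand-in function invented for the example.

        import unittest

        def classify(n):
            return "negative" if n < 0 else "non-negative"

        class ClassifyTest(unittest.TestCase):
            # Dynamic verification: run the code on chosen inputs and
            # check the observed behavior, including the boundary case.
            def test_boundary(self):
                self.assertEqual(classify(0), "non-negative")
            def test_negative(self):
                self.assertEqual(classify(-1), "negative")

        if __name__ == "__main__":
            unittest.main()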

  2. metrics

    1. most prior metrics are related to estimating code characteristics

    2. automated measurements

    3. size metrics

      1. measuring size is not simple - LOC (lines of code)

        1. DLOC (delivered LOC) per working month; KLOC total

        2. total lines vs. lines of code; how to count libraries, macros, and program-generator output (sketch below)
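
        A sketch of why "lines of code" is ambiguous: the same file
        gives different sizes depending on whether blank and comment
        lines count. The comment convention assumed here is Python's.

          def size_metrics(source):
              lines = source.splitlines()
              total = len(lines)
              # "Lines of code" here excludes blanks and pure comment
              # lines; other definitions (and macros, libraries,
              # generated code) give different answers.
              code = sum(1 for ln in lines
                         if ln.strip() and not ln.strip().startswith("#"))
              return total, code

          src = "# driver\n\nx = 1\nprint(x)\n"
          print(size_metrics(src))   # (4, 2): total lines vs. LOC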

      2. Halstead metrics

        1. nopr - number of different operators used

        2. nopn - number of different operands used

        3. vocabulary n = nopr + nopn

        4. Nopr - total number of operators used

        5. Nopn - total number of operands used

        6. length N = Nopr + Nopn

        7. volume V = N log2 n

        8. correlates highly with LOC and with error counts

        9. some question as to exactly what counts as an operator or an operand (worked example below)
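
        A worked instance of the definitions above; deciding what
        counts as an operator or an operand is exactly the hard part
        (item 9), so the token classification below is an assumption.

          import math

          # For the statement  x = x + 1  take:
          #   operators: =, +       -> nopr = 2 distinct, Nopr = 2 total
          #   operands:  x, x, 1    -> nopn = 2 distinct, Nopn = 3 total
          nopr, nopn, Nopr, Nopn = 2, 2, 2, 3

          n = nopr + nopn        # vocabulary n = 4
          N = Nopr + Nopn        # length     N = 5
          V = N * math.log2(n)   # volume     V = 5 * log2(4) = 10.0
          print(n, N, V)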

    4. complexity metrics

      1. size itself is a well-correlated measure of complexity

      2. cyclomatic complexity, based on the number of decisions (sketch after this list)

      3. cyclomatic complexity adjusted for decision complexity

      4. Halstead's difficulty measure D = (nopr/2) * (Nopn/nopn) - operator variety times average operand frequency

      5. live variables

        1. measure the fan-in to a statement

        2. higher fan-in means more complexity

        3. live spans - larger spans mean more complexity (sketch below)
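
        A sketch of the span measure in item 3 above: the distance, in
        statement numbers, between a variable's first and last use; the
        longer the span, the longer a reader must keep the variable in
        mind. The line-number representation is an assumption.

          def spans(uses):
              # uses maps a variable name to the statement numbers
              # where it appears; the span is last minus first.
              return {v: max(ns) - min(ns) for v, ns in uses.items()}

          # 'total' used on statements 1, 2, and 9; 'i' only on 2 and 3
          print(spans({"total": [1, 2, 9], "i": [2, 3]}))
          # {'total': 8, 'i': 1} -- the larger span adds more complexity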

      6. knot count - counts crossings between control-flow jumps, the mark of unstructured programming
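
      A sketch of cyclomatic complexity (item 2 above): for a
      single-entry, single-exit routine, V(G) is the decision count
      plus one. The approximation below counts branching nodes with
      Python's ast module and ignores item 3's adjustment for decision
      complexity.

        import ast

        def cyclomatic(source):
            # V(G) = number of binary decisions + 1
            branches = (ast.If, ast.While, ast.For, ast.IfExp)
            return 1 + sum(isinstance(node, branches)
                           for node in ast.walk(ast.parse(source)))

        src = ("def sign(x):\n"
               "    if x < 0: return -1\n"
               "    if x > 0: return 1\n"
               "    return 0\n")
        print(cyclomatic(src))   # 3: two decisions plus one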

    5. style metrics

      1. size of modules, procedures, statements, variable names

      2. counts of variables, constants, comment lines, gotos

      3. comparison against historical data

      4. ranges of acceptable metric values (sketch below)
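
      A sketch of item 4's ranges of acceptable values: measured style
      metrics compared against limits. The metric names and limits are
      invented for illustration; in practice the historical data of
      item 3 would supply them.

        # Assumed acceptable ranges, e.g. drawn from past projects.
        LIMITS = {"procedure_loc": (1, 50), "name_length": (2, 20)}

        def acceptable(metric, value):
            lo, hi = LIMITS[metric]
            return lo <= value <= hi

        print(acceptable("procedure_loc", 120))   # False: flag for review
        print(acceptable("name_length", 8))       # True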


This page last modified on 15 March 2000.