Lecture Notes for CS 325

Requirements Validation and Metrics, 31 January 2000


  1. objectives

    1. validation - external requirements behavior; depends on the customer and others, such as the FDA or the NRC

    2. verification - internal requirements behavior; depends on method

    3. metrics - measures of process, project, requirements

  2. motivation

    1. correctness - do the requirements say what the customer and we want? all the good things are present

    2. errors - if not, where's the problem? no bad things are present

      1. error types - omission, incorrect facts, inconsistency, ambiguity

        1. percentages vary from project to project

        2. the classification itself is important for capturing statistics

    3. control and prediction - what software engineering is about

  3. requirements validation - external behavior

    1. requirement reviews

      1. stakeholder reviews - client, developers, lawyers, ...

      2. single group or multi-group reviews

      3. review formality

      4. review aids - checklists, questions, previous statistics

      5. reviews are effective at catching errors

    2. other review techniques

      1. reading - classic textual review

      2. scenarios - using the spec to answer questions about possible uses

      3. prototyping - use the spec to build a prototype

  4. requirements verification - internal behavior

    1. difficult, due to the lack of formal approaches to requirements

    2. same tools as validation, except the stakeholders are largely internal

      1. can I design, implement, test, ... from these requirements

      2. how long is it going to take, how much risk, what tools, ...

      3. but remember, it's a problem statement, not a design, implementation, test, ... plan

    3. automated cross-referencing and general feature extraction

      1. checks small-scale, internal details - e.g., terms defined before use

      2. applicable to structure-generating analysis techniques

      3. NASA's paragraph-indent checker

      4. effective, but only at a small scale
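A small-scale check like "terms defined before use" can be sketched in a few lines of Python. The sketch below is illustrative only: the "term: definition" paragraph convention, the function name, and the example spec are all assumptions, not any particular tool's interface.

```python
def check_defined_before_use(spec_text, glossary):
    """Flag glossary terms that appear in a paragraph earlier than
    the paragraph that defines them (or are never defined at all)."""
    paragraphs = [p for p in spec_text.split("\n\n") if p.strip()]
    # where each term is defined, assuming a "term: definition"
    # paragraph convention (an assumption made for this sketch)
    defined_at = {}
    for i, para in enumerate(paragraphs):
        for term in glossary:
            if para.lower().startswith(term.lower() + ":"):
                defined_at.setdefault(term, i)
    # report (term, paragraph index) for each use before definition
    problems = []
    for i, para in enumerate(paragraphs):
        for term in glossary:
            if term.lower() in para.lower():
                defined = defined_at.get(term)
                if defined is None or i < defined:
                    problems.append((term, i))
    return problems

# invented three-paragraph spec: "sensor" is used before it is defined
spec = ("actuator: a device that changes valve settings.\n\n"
        "the sensor reports its readings to the actuator.\n\n"
        "sensor: a device that measures tank pressure.")
print(check_defined_before_use(spec, ["actuator", "sensor"]))
```

As the notes say, this kind of check only catches small-scale, internal problems; it says nothing about whether the definitions themselves are right.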

  5. requirements metrics

    1. measure characteristics of the requirements process and document

      1. more accurately predict and control the current project

      2. improve the process model in general

    2. size metrics - a shaky relation between the size of the specification and the size of the project

      1. text measurements - paragraphs, pages

      2. characterize problem size independently of the specification

        1. harder problems should lead to bigger, longer, harder projects

        2. but what is "harder" - defined by the problem, or by the requirements method?

      3. function points

        1. a problem is hard if it manipulates a lot of complex data within a complex environment

        2. oriented towards information systems

        3. five I/O types, each weighted by complexity

        4. unadjusted (raw) function points (UFP) - the weighted sum of the I/O type counts

        5. complexity adjustment factor (CAF) measures environmental complexity

          1. 14 characteristics, each rated on one of six levels (0-5), summed to get N

          2. CAF = 0.65 + 0.01*N

        6. delivered function points (DFP) = CAF*UFP

        7. a reasonably accurate estimator of project size and cost

          1. one DFP corresponds to roughly 100 lines of COBOL or 80 lines of PL/1

        8. active work to extend FP to other types of systems
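The UFP/CAF/DFP arithmetic above is easy to mechanize. The sketch below uses the commonly published (low, average, high) weights for the five I/O types; the example counts and characteristic ratings are invented for illustration.

```python
# weights for the five i-o types at (low, average, high) complexity;
# these are the commonly published albrecht/ifpug values
WEIGHTS = {
    "external inputs":     (3, 4, 6),
    "external outputs":    (4, 5, 7),
    "external inquiries":  (3, 4, 6),
    "internal files":      (7, 10, 15),
    "external interfaces": (5, 7, 10),
}

def unadjusted_fp(counts):
    """counts maps an i-o type to its (low, average, high) counts;
    UFP is the weighted sum over all five types."""
    return sum(c * w
               for io_type, weights in WEIGHTS.items()
               for c, w in zip(counts.get(io_type, (0, 0, 0)), weights))

def caf(ratings):
    """ratings: the 14 characteristics, each rated 0..5 (six levels);
    N is their sum and CAF = 0.65 + 0.01*N."""
    return 0.65 + 0.01 * sum(ratings)

def delivered_fp(counts, ratings):
    return caf(ratings) * unadjusted_fp(counts)

# invented example: 4 average-complexity inputs, 2 simple outputs,
# 1 complex internal file, and middling environmental ratings
counts = {"external inputs": (0, 4, 0),
          "external outputs": (2, 0, 0),
          "internal files": (0, 0, 1)}
ratings = [3] * 14        # N = 42, so CAF = 1.07
print(delivered_fp(counts, ratings))
```

Note that all 14 ratings at the extremes give CAF a range of 0.65 to 1.35, so environmental complexity can swing the delivered count by roughly a third either way from the raw UFP.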

      4. bang metric

        1. a problem is hard if the lowest-level DFD has a lot of complex transformations operating on complex data

        2. measures bits of data per transform

    3. quality metrics - how good is the specification

      1. SRD error count

        1. compare with historical data to measure goodness

        2. estimate the number of latent errors

      2. change request frequency - both within and after the specification process

      3. quality attributes - highly suspect, but they're numbers

        1. ambiguity measurements, cross-product numbers


This page last modified on 5 February 2001.