Lecture Notes for CS 325
Requirements Validation and Metrics, 31 January 2000
- objectives
  - validation - external requirements behavior; depends on the customer and others, such as the FDA or the NRC
  - verification - internal requirements behavior; depends on the method
  - metrics - measures of the process, the project, and the requirements
- motivation
  - correctness - do the requirements say what the customer and we want; some good things
  - errors - if not, where's the problem; no bad things
    - error types - omission, incorrect facts, inconsistency, ambiguity
    - percentages vary from project to project
    - the classification itself is important for capturing statistics
  - control and prediction - what software engineering is about
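The per-type error statistics mentioned above can be captured with a simple tally. This is a minimal sketch; the error labels and the sample data are hypothetical, not from the notes.

```python
# Tally review-found requirements errors by type and report each type's
# share of the total, in percent. The error categories follow the notes;
# the sample list below is invented for illustration.
from collections import Counter

def error_percentages(errors):
    """Map each error type to its percentage of all errors found."""
    counts = Counter(errors)
    total = sum(counts.values())
    return {kind: 100.0 * n / total for kind, n in counts.items()}

found = ["omission", "omission", "incorrect fact",
         "inconsistency", "ambiguity", "omission"]
print(error_percentages(found))  # omissions dominate this made-up sample
```

Tracked across projects, these percentages become the historical baseline the notes refer to for control and prediction.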
- requirements validation - external behavior
  - requirement reviews
    - stakeholder reviews - client, developers, lawyers, ...
    - single-group or multi-group reviews
    - review formality
    - review aids - checklists, questions, previous statistics
    - reviews are effective at catching errors
  - other review techniques
    - reading - classic textual review
    - scenarios - using the spec to answer questions about possible uses
    - prototyping - using the spec to build a prototype
- requirements verification - internal behavior
  - difficult, due to the lack of formal approaches to requirements
  - same tools as validation, except the stakeholders are largely internal
    - can I design, implement, test, ... from these requirements?
    - how long will it take, how much risk, what tools, ...
    - but remember, it's a problem statement, not a design, implementation, or test plan
  - automated cross-referencing and general feature extraction
    - checks small-scale, internal details - e.g., terms defined before use
    - applicable to structure-generating analysis techniques
    - NASA's paragraph-indent checker
    - effective at a small scale
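A terms-defined-before-use check like the one above can be sketched in a few lines. The conventions here are assumptions for illustration: terms are ALL-CAPS words, and a line of the form "TERM: ..." defines TERM.

```python
# Minimal sketch of an automated cross-reference check: flag terms a
# specification uses before it defines them. The definition syntax
# ("TERM: ...") and the all-caps term convention are assumed, not
# taken from any real requirements method.
import re

DEFINITION = re.compile(r"^([A-Z]{2,}):")  # "SENSOR: ..." defines SENSOR

def used_before_defined(spec_lines):
    """Return (line_number, term) pairs for terms used before definition."""
    defined, problems = set(), []
    for lineno, line in enumerate(spec_lines, 1):
        m = DEFINITION.match(line)
        if m:
            defined.add(m.group(1))
            continue
        for term in re.findall(r"\b[A-Z]{2,}\b", line):
            if term not in defined:
                problems.append((lineno, term))
    return problems

spec = ["SENSOR: a device that reports temperature.",
        "The ALARM sounds when the SENSOR reading is high.",
        "ALARM: an audible warning."]
print(used_before_defined(spec))  # [(2, 'ALARM')]
```

As the notes say, this kind of check is effective only at a small scale: it catches lexical slips, not whether the requirements are right.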
- requirements metrics
  - measure characteristics of the requirements process and document
    - more accurately predict and control the current project
    - improve the process model in general
  - size metrics - a shaky relation between the size of the specification and the size of the project
    - text measurements - paragraphs, pages
    - characterize problem size independent of the specification
      - harder problems should lead to bigger, longer, harder projects
      - but what is "harder" - defined by the problem, or by the requirements method?
- function points
  - a problem is hard if it manipulates a lot of complex data within a complex environment
  - oriented towards information systems
  - five i/o types, each weighted by complexity
  - unadjusted (raw) function points (UFP) - the weighted sum of the i/o type counts
  - complexity adjustment factor (CAF) measures environmental complexity
    - 14 characteristics, each rated on one of six levels, summed to get N
    - CAF = 0.65 + 0.01N
  - delivered function points (DFP) = CAF * UFP
  - a reasonably accurate estimator of project size and cost
    - one DFP equals about 100 lines of COBOL or 80 lines of PL/1
  - active work to extend function points to other types of systems
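A worked calculation ties the formulas together. The notes give only CAF = 0.65 + 0.01N and DFP = CAF * UFP; the five i/o type names and their low/average/high weights below are the textbook IFPUG values, included here as an assumption, and the sample counts are invented.

```python
# Worked function-point calculation from the formulas in the notes.
# WEIGHTS holds (low, average, high) complexity weights per i/o type;
# these are the standard textbook values, not taken from the notes.
WEIGHTS = {
    "external inputs":          (3, 4, 6),
    "external outputs":         (4, 5, 7),
    "external inquiries":       (3, 4, 6),
    "internal logical files":   (7, 10, 15),
    "external interface files": (5, 7, 10),
}

def unadjusted_fp(counts):
    """counts maps each i/o type to (n_low, n_avg, n_high) occurrences."""
    return sum(n * w
               for kind, ns in counts.items()
               for n, w in zip(ns, WEIGHTS[kind]))

def delivered_fp(counts, ratings):
    """ratings: the 14 characteristics, each rated 0..5; N is their sum."""
    caf = 0.65 + 0.01 * sum(ratings)
    return caf * unadjusted_fp(counts)

counts = {"external inputs":          (2, 1, 0),
          "external outputs":         (1, 0, 0),
          "external inquiries":       (0, 1, 0),
          "internal logical files":   (1, 0, 0),
          "external interface files": (0, 0, 0)}
# UFP = 2*3 + 1*4 + 1*4 + 1*4 + 1*7 = 25
# with every characteristic rated 3: N = 42, CAF = 0.65 + 0.42 = 1.07
print(unadjusted_fp(counts), delivered_fp(counts, [3] * 14))
```

By the 100-lines-of-COBOL rule of thumb in the notes, this toy system's DFP of about 26.75 would suggest a project on the order of 2,700 lines of COBOL.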
- bang metric
  - a problem is hard if the lowest-level DFD has a lot of complex transformations operating on complex data
  - measures bits of data per transform
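The raw input to the bang metric can be illustrated with a toy token count. The dictionary representation of a DFD and the sample transforms are assumptions; DeMarco's full metric goes further and weights each transform by its token count and class.

```python
# Toy illustration of the bang metric's raw material: for each
# lowest-level DFD transform, count the data tokens crossing its
# boundary. The DFD encoding and sample data are invented for
# illustration; this is not DeMarco's full weighting scheme.

def tokens_per_transform(dfd):
    """dfd maps a transform name to its (inputs, outputs) data tokens."""
    return {name: len(ins) + len(outs) for name, (ins, outs) in dfd.items()}

dfd = {
    "validate order": (["order"], ["valid order", "rejection"]),
    "price order":    (["valid order", "price table"], ["priced order"]),
}
print(tokens_per_transform(dfd))  # {'validate order': 3, 'price order': 3}
```

Summing such per-transform counts over the lowest-level DFD gives a problem-size figure that, like function points, is independent of how the specification happens to be written.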
- quality metrics - how good is the specification?
  - SRD error count
    - compare with historical data to measure goodness
    - determine latent errors
  - change-request frequency - both during and after the specification process
  - quality attributes - highly suspect, but they're numbers
    - ambiguity measurements, cross-product numbers
This page last modified on 5 February 2001.