IEEE Standard 829-1998 for Software Test Documentation: a summary

IEEE standard 829-1998 covers test plans in its section 4, test designs in section 5, test cases in section 6, test logs in section 9, test incident reports in section 10, and test summary reports in section 11. The remaining sections cover material that I have decided not to summarise.
Beware: this is only a summary. Anyone interested in claiming conformance to the standard should read the standard itself. I advise anyone interested in test documentation to read the standard first, including its extensive examples. You may well decide that the full paper trail is too much paper for your needs, and choose to use the ideas in the standard selectively. The key point is to have test cases organised coherently, to do the testing, to log what happens, and to think about the outcomes.
A test plan answers the questions:
- WHAT is to be tested,
- HOW it is to be tested,
- WHO is to do the testing,
- WHAT resources they will need,
- WHEN they will do it, and
- WHAT can go wrong?
A test plan has the following parts, in this order.
1. Test plan identifier. A unique label so you can refer to that document.
2. Introduction. Outlines what is to be tested. The top-level test plan should point to related documents such as the project plan, quality assurance plan, configuration management plan, and standards. Lower-level plans should point to their parents. I suggest using hypertext links to link test plans in temporal order, and to point to any relevant material.
3. Test items. What is to be tested? Be explicit about versions. Say how to get the test items into the test environment. Point to whatever documentation of the test items exists. Point to any "incident reports".
4. Features to be tested. Say which features and combinations of features are to be tested. You need not cover all the features of one test item in one test plan.
5. Features not to be tested. If you don't cover all the features of a test item, you should say which ones you left out and why.
6. Approach. Describe what is to be done in enough detail that people can figure out how long it will take and what resources it will require. What tools will you need? How thorough will testing have to be? How can you tell how thorough it was? What might get in the way?
7. Item pass/fail criteria. How will you know whether a test item has passed its tests?
8. Suspension criteria and resumption requirements. When is it OK to stop this test for a while? What will you have to do when you start again?
9. Test deliverables. What documents should the testing process deliver? Logs, reports, test input and output data, the things described in this summary, and a few more. You decide what you need.
10. Testing tasks. What must be done to set up the test? What must be done to perform the test? What has to be done in what order?
11. Test environment needs. What must the test environment look like? What would it be nice to have? Tools? People? Building space? Bandwidth? How will these needs be met?
12. Responsibilities. Who does what?
13. Staffing and training needs. How many people with what skills will you need? If there aren't enough people with the required skills, how are they going to get them?
14. Schedule. Define milestones, estimate times, book resources.
15. Risks and contingencies. What are you assuming that could go wrong? What contingency plans do you have?
16. Approvals. Which people must approve the plan? Get their signatures.
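To make those sixteen parts concrete, here is a minimal sketch of a machine-readable test plan skeleton in Python. The field names, types, and the "TP-" identifier scheme are my own illustration, not something the standard mandates.

```python
from dataclasses import dataclass, field

@dataclass
class TestPlan:
    """One test plan record; fields follow the sixteen parts above."""
    identifier: str                  # 1. unique label, e.g. "TP-2024-003"
    introduction: str                # 2. scope, with links to parent documents
    test_items: list[str]            # 3. items under test, with versions
    features_to_test: list[str]      # 4. features and combinations covered
    features_not_to_test: list[str]  # 5. what is left out, and why
    approach: str                    # 6. techniques, tools, thoroughness
    item_pass_fail_criteria: str     # 7. how a test item passes its tests
    suspension_resumption: str       # 8. when to stop, what restarting takes
    deliverables: list[str]          # 9. logs, reports, input/output data
    tasks: list[str]                 # 10. setup and execution steps, in order
    environment_needs: str           # 11. hardware, software, people, space
    responsibilities: dict[str, str] # 12. who does what
    staffing_training: str           # 13. skills needed and how to get them
    schedule: str                    # 14. milestones, estimates, bookings
    risks_contingencies: str         # 15. assumptions and fallback plans
    approvals: list[str] = field(default_factory=list)  # 16. sign-offs
```

Keeping plans as structured data makes it easy to check mechanically that every plan names its approvals and points to a real parent document.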
A test design spells out what features are to be
tested and how they are to be tested. It includes the following parts, in this
order:
1. Test design specification identifier. A unique label so you can refer to that document. Point to the test plan.
2. Features to be tested. Point to the requirements for each feature or combination of features to be tested. Mention features that will be used but not tested.
3. Approach refinements. Spell out how the test is to be done. What techniques? How will results be analysed? What setup will be needed for test cases?
4. Test identification. Point to the test cases, with short descriptions. (Some test cases might be part of more than one design or plan.)
5. Feature pass/fail criteria. Spell out how you will tell whether a feature has passed its tests.
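Continuing the same illustrative scheme, a test design specification can be recorded the same way; the field names and the "TD-" labels are assumptions of this sketch, not part of the standard.

```python
from dataclasses import dataclass

@dataclass
class TestDesign:
    """One test design record; fields follow the five parts above."""
    identifier: str                  # 1. unique label, e.g. "TD-2024-003-01"
    test_plan: str                   #    pointer back to the owning test plan
    features_to_test: list[str]      # 2. links to the requirements covered
    approach_refinements: str        # 3. techniques, analysis, shared setup
    test_cases: list[str]            # 4. identifiers of the cases it uses
    feature_pass_fail_criteria: str  # 5. how a feature passes its tests
```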
A test case is a single test you can run. The
document has the following parts, in this order:
1. Test case identifier. A unique label so you can refer to that document. Point to the test plan or test design.
2. Test items. List the items and features you will check. Point to their documentation.
3. Input specifications. Describe all the information passed to the test item for this test. [Either point to files, or include the information in such a way that it can be automatically extracted; a sketch of one such format follows this list.]
4. Output specifications. Describe all the behaviours required, including non-functional requirements like time, memory use, and network traffic. Provide exact values if you can.
5. Test environment needs. What hardware, software, and other stuff do you need?
6. Special procedural requirements. Any special setup, user interaction, or tear-down actions?
7. Inter-case dependencies. What other test cases must be done first? Point to them. Why must they be done first?
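Since the input specification should be automatically extractable, a structured on-disk format is a natural choice. Here is a minimal sketch using JSON from Python; every file name, field, and identifier below is hypothetical.

```python
import json

# A hypothetical test case document, structured so a test runner can
# extract the inputs and expected outputs mechanically.
test_case = {
    "identifier": "TC-2024-003-017",
    "design": "TD-2024-003-01",              # pointer to the test design
    "test_items": ["webapp v2.4.0-rc1: checkout"],
    "input": {"cart": ["sku-123"], "payment": "visa-test-card"},
    "output": {                               # required behaviour
        "status": "order confirmed",
        "max_response_ms": 500,               # non-functional requirement
    },
    "environment": ["staging cluster", "payment sandbox"],
    "procedural_requirements": "Reset the payment sandbox first.",
    "depends_on": ["TC-2024-003-001"],        # login must succeed first
}

with open("TC-2024-003-017.json", "w") as f:
    json.dump(test_case, f, indent=2)
```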
The standard also defines two other documents, the test procedure specification and the test item transmittal report, which I have decided not to summarise here.
A test log answers the question "what happened when testing was done?" [As much as possible, this should be automated; a sketch of one way to do so follows the list.] A test log includes the following sections in the following order:
1. Test log identifier. A unique label so you can refer to that document. Point to the test case.
2. Description. Information common to all the items in the log goes here. What was tested (with versions)? What was the environment? Other documents say what was supposed to happen; this one says what did happen.
3. Activity and event entries. Beginning and ending timestamps and the name of the actor for each activity. Point to the test procedure. Who was there and what were they doing? What did you see? Where did the results go? Did the test work? When something surprising happened, what was going on just before, what was the surprise, and what did you do about it? Point to incident reports, if any.
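As a sketch of the automation suggested above, the following Python function runs one test case and appends a timestamped activity entry to a JSON-lines log. The log file name, the entry fields, and the identifier scheme are my own assumptions.

```python
import json
import time
from datetime import datetime, timezone

def run_and_log(case_id, test_fn, actor, log_path="test-log-TL-2024-003.jsonl"):
    """Run one test case and append a timestamped activity entry."""
    started = datetime.now(timezone.utc).isoformat()
    t0 = time.monotonic()
    try:
        test_fn()
        outcome = "pass"
    except Exception as exc:           # anything surprising goes in the log
        outcome = f"fail: {exc}"
    entry = {
        "test_case": case_id,
        "actor": actor,
        "started": started,
        "ended": datetime.now(timezone.utc).isoformat(),
        "duration_s": round(time.monotonic() - t0, 3),
        "outcome": outcome,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example: log a trivially passing check.
run_and_log("TC-2024-003-017", lambda: None, actor="jane")
```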
If anything happens that should be looked into
further, a test incident report should be written. It should contain the following
sections in the following order:
1. Test incident report identifier. A unique label so you can refer to that document.
2. Summary. Briefly, what happened? Point to the test case, the test log, and any other helpful documents.
3. Incident description. A detailed description of what happened. See the standard for a list of topics.
4. Impact. What effect will this have on the rest of the testing process? How important is it?
A test summary report doesn't just summarise what happened; it comments on the significance of what happened. It should contain the following sections in the following order:
1. Test summary report identifier. A unique label so you can refer to that document.
2. Summary. Summarise what was tested and what happened. Point to all relevant documents.
3. Variances. If any test items differed from their specifications, describe that. If the testing process didn't go as planned, describe that. Say why things were different.
4. Comprehensiveness assessment. How thorough was testing, in the light of how thorough the test plan said it should be? What wasn't tested well enough? Why not?
5. Summary of results. Which problems have been dealt with? What problems remain? (A sketch of computing this from the logs follows the list.)
6. Evaluation. How good are the test items? What's the risk that they might fail?
7. Summary of activities. In outline, what were the main things that happened? What did they cost (people, resource use, time, money)?
8. Approvals. Who has to approve this report? Get their signatures.
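Finally, the summary of results (part 5 above) can be computed from the logs rather than compiled by hand; a sketch assuming the JSON-lines format used earlier.

```python
import json

def summarise(log_path="test-log-TL-2024-003.jsonl"):
    """Tally pass/fail outcomes per test case from a JSON-lines test log."""
    passed, failed = set(), set()
    with open(log_path) as f:
        for line in f:
            entry = json.loads(line)
            case = entry["test_case"]
            (passed if entry["outcome"] == "pass" else failed).add(case)
    # A case that both failed and passed at some point is treated as
    # dealt with; this sketch ignores the order of the entries.
    return {
        "dealt_with": sorted(passed & failed),
        "remaining": sorted(failed - passed),
        "clean_passes": sorted(passed - failed),
    }
```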
