From: Dani Eder (firstname.lastname@example.org)
Date: Wed Jan 29 2003 - 14:30:11 MST
--- Ben Goertzel <email@example.com> wrote:
> 4) Check the software program against the
> specification carefully, using
> repeated code inspections, unit tests, and black and
> white box system tests.
> If this approach is followed, one will have software
> that is fantastically
> more reliable than most software produced this way.
This is very close to what we do developing
software for the Space Station. We have 8 people
in the Requirements group, who write the
specifications and then check that the code and
the test suite fulfill them. We have
12 designers who actually write the code, and
finally we have 32 people (including myself) who
test the software. The stuff we are developing
is the lower level control software that directly
controls fans and pumps, and reports temperature
and atmosphere composition. The higher tier,
which involves astronaut interface, mission
control, and programmed sequences of commands,
is done elsewhere. Most of our testing is at
the level of one function in one CPU's software
being exhaustively tested for every possible input.
For example, there is a valve that can shut off
the flow of cooling water in a space station
module (critical components are water cooled so
they can continue to operate even if the module
loses its atmosphere). The valve is controlled
by signals from one of the onboard computers, and
sends back 2 sensor signals from two contacts,
one that is made when the valve is closed, and
the other when the valve is open. We test for
all four possible sensor combinations: open,
closed, transition (no contact on either sensor),
and fault (contact on both sensors, which should
be physically impossible).
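That decoding maps naturally onto a small function.
A minimal sketch in Python (illustrative only; the
real software is low-level control code, and every
name here is hypothetical):

```python
# Illustrative sketch only -- not flight code. Assumes the two
# limit-switch contacts arrive as booleans; names are hypothetical.
def valve_state(closed_contact, open_contact):
    """Decode the two contact signals into one of four valve states."""
    if closed_contact and open_contact:
        return "fault"        # both contacts made: physically impossible
    if closed_contact:
        return "closed"
    if open_contact:
        return "open"
    return "transition"       # neither contact made: valve mid-travel
```

Exhaustive testing here means calling the function
with all four input combinations and checking each
result.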
We tested each command for each valve state (e.g.
issuing a close-valve command when the valve is
already closed).
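A command-by-state sweep like that can be sketched
as a small test harness. The states, commands, and
expected-outcome table below are invented stand-ins,
not the actual flight behavior:

```python
# Illustrative sketch of an exhaustive command-by-state sweep.
# States, commands, and the expected-outcome rule are hypothetical.
STATES = ["open", "closed", "transition", "fault"]
COMMANDS = ["open_valve", "close_valve"]

def expected_state(command, state):
    """Hypothetical rule: a fault latches; otherwise the valve
    ends up in the commanded position."""
    if state == "fault":
        return "fault"
    return "open" if command == "open_valve" else "closed"

def run_sweep(apply_command):
    """Drive every command in every starting state; collect results."""
    results = {}
    for cmd in COMMANDS:
        for state in STATES:
            results[(cmd, state)] = apply_command(cmd, state)
    return results
```

In a real test suite, `apply_command` would drive the
unit under test and the results would be compared
against the specification.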
At each stage the groups review each other's
work. So when the specification is being written,
design and test consider whether they can write
and test to that spec. Conversely at the test
end, requirements and design consider whether our
test suite adequately covers the specified
performance, and exercises the flight code.
> Why is this well-known approach not followed very often?
> Firstly COST ...
> secondly IGNORANCE
> The formal approach takes a long time, and it takes
> expensive people (people
> with a grasp of math as well as programming). It
> takes more $$ spent on
> software testing than most companies want to spend.
Space Station software has been in development for
10 years, and it isn't cheap. On the plus side,
it's been functioning in orbit for 4 years with
no major problems. Yes, there have been bugs, but
they have been minor.
In addition to being very careful with the software,
we were very careful to design critical systems
(where a failure would kill the crew or destroy
the station) so that no single computer, software,
human, or hardware failure would by itself be
critical. We also took into account statistical
failure rates for equipment, and added backup units
where needed to bring the risk down to the
required level. The major remaining risk, which
we couldn't do much about, was natural and manmade
objects running into the station. There's about
a 1-2% chance per year of a hull penetration by
such objects, and the effect of something big
enough to punch through the 1 cm aluminum walls
is at least the energy equivalent of a hand
grenade. And most anywhere in the Station, behind
the pressure shell there is equipment of some
kind - so the effect will be bits of hardware
flying everywhere. That's the main reason
for an emergency escape capsule at the Station.
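For a sense of scale, a 1-2% annual penetration
chance compounds over the station's life. A
back-of-the-envelope calculation, assuming each
year is independent at the quoted rate:

```python
# Cumulative probability of at least one hull penetration,
# assuming independent years at a constant annual rate.
def penetration_risk(annual_rate, years):
    return 1.0 - (1.0 - annual_rate) ** years

# At 1% per year over 15 years: roughly a 14% chance.
# At 2% per year over 15 years: roughly a 26% chance.
```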
The poor soul who happens to be in a module when
something penetrates it will be dead from
(a) shockwave, (b) shrapnel, and (c) loss of
pressure. The rest of the crew is supposed to
close the hatches behind them, and get out in
the escape capsule.
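The earlier point about adding backup units to
meet a required risk level works out simply: if a
unit fails with probability p over some interval
and failures are independent, n parallel units all
fail with probability p^n. A sketch with invented
figures for illustration:

```python
# Probability that every one of n independent redundant units fails.
# The 5% unit failure figure below is invented for illustration.
def all_fail_probability(unit_failure_prob, num_units):
    return unit_failure_prob ** num_units

# A hypothetical pump with a 5% chance of failing over the interval:
# one unit gives 5% system risk, two units 0.25%, three 0.0125%.
```

Adding units until p^n drops below the required
risk level is the basic sizing calculation.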
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:41 MDT