From: Ben Goertzel (email@example.com)
Date: Fri Aug 09 2002 - 21:40:44 MDT
Having read the web page, but NOT studied the code or tried to use the
system, I think the approach is badly wrong-headed...
The argument that massively parallel brainlike systems are more "reliable"
than algorithmic systems is a play on the multiple senses of the word
"reliable."
Simple, toy systems of this nature may display nice failure-recovery
properties. But really complex massively parallel agent systems can be
terribly hard to debug, as I know from long experience. And the kind of
reliability that brains have is not the kind we want from software systems,
in a mission-critical context. Actually, brains have bad unreliability
problems: they go nuts, they die, they delude themselves, they react
terribly to minor modifications in their chemical substrate, etc.
Also, he claims that algorithmic programming can be replaced by simple
visual programming using his interface. Yeah, right. It's OK for
UI-focused or physical-simulation-focused stuff I guess, but not for serious
AI coding for instance...
I think that the solution to the software reliability problem is clear,
well-known, and less sexy than COSA.
1) Formalize the requirements for the program using mathematics
2) Formalize the software design using a specification language like Z
3) Prove that the specification fulfills the requirements, mathematically
4) Translate the software design into a mathematically simple language like
Haskell or Scheme or ML
5) Prove that the resulting code correctly implements the formal specification
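To make the spec-vs-implementation split concrete, here is a rough sketch (my own, not from the post; all names are hypothetical) of the discipline behind steps 1-5, with the specification written as executable predicates and actual proof replaced by exhaustive checking on small inputs -- a weak stand-in for real theorem proving, but it shows the shape of the correctness claim:

```python
from itertools import permutations

# Step 2 analogue: the "specification" of sorting, stated as predicates
# rather than Z schemas.
def is_sorted(xs):
    return all(a <= b for a, b in zip(xs, xs[1:]))

def is_permutation_of(xs, ys):
    return sorted(xs) == sorted(ys)

# Step 4 analogue: a deliberately simple implementation, the kind one
# could reason about equationally in Haskell or ML.
def insertion_sort(xs):
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

# Step 5 analogue: the theorem one would actually prove; here it is
# merely checked over all permutations of a small multiset.
def satisfies_spec(xs):
    out = insertion_sort(xs)
    return is_sorted(out) and is_permutation_of(out, xs)

assert all(satisfies_spec(list(p)) for p in permutations([3, 1, 2, 2]))
print("spec holds on all tested inputs")
```

The point of the discipline is that `satisfies_spec` would be discharged once and for all by a proof over all inputs, not sampled as above.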
These proofs can be carried out by humans together with automated
theorem-provers.
All this is known technology. It is not used much in practice, because it
takes more time (hence more money) than doing software development the
usual way.
We have not taken this approach for Novamente, because we don't consider it
worth the time; we don't need a provably correct system right now. However,
I would like to take this approach for Novamente 2, if there is one (if
Novamente 1 doesn't end up writing Novamente 2 ;).
You'll find this sort of method is used for mission-critical military
software more often than anywhere else, which is sensible.
As AI advances, it'll be possible to efficiently do this kind of
theorem-proving for regular software projects, hence software reliability
should increase due to AI's contributions, even before AIs render human
software engineering obsolete...
-- Ben Goertzel
> -----Original Message-----
> From: firstname.lastname@example.org [mailto:email@example.com] On Behalf
> Of David Hart
> Sent: Friday, August 09, 2002 9:00 PM
> To: firstname.lastname@example.org
> Subject: project COSA
> From: http://home1.gte.net/res02khr/Cosas/COSA.htm
> "COSA is a reactive, signal-based software construction and execution
> environment. The goal of Project COSA is to improve software reliability
> and productivity by at least one order of magnitude ... Software
> creation consists of connecting elementary concurrent objects (cells)
> together ... Cells can be combined into high-level, plug-compatible
> components and/or applications..."
> Self-examination, introspection and self-modification are design
> considerations of COSA.
> What are SL4 list members' opinions on the approach and technical merits
> of COSA? ("Silver Bullet" and other emotive arguments aside)
> David Hart
> CTO, Atlantis Blue
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:40 MDT