From: Eliezer S. Yudkowsky (email@example.com)
Date: Wed Feb 27 2002 - 06:48:24 MST
Eugene Leitl wrote:
> Your first fielded alpha must demonstrate robust (i.e. not dying an
> instant death) modification of its own code base as a first milestone.
Uh, not true. A seed AI is fundamentally built around general intelligence,
with self-improvement an application of that intelligence. It may also use
various functions and applications of high-level intelligence as low-level
glue, which is an application closed to humans, but that doesn't necessarily
imply robust modification of the low-level code base; it need only imply
robust modification of any of the cognitive structures that would ordinarily
be modified by a brainware system.
The milestones for general intelligence and for self-modification are
separate tracks - though, of course, not truly independent of each other
in practice - and my current take is that the first few GI milestones are
likely to be achieved before the first code-understanding milestone.
It's possible, though, that I may have misunderstood your meaning, since I
don't know what you meant by "first fielded alpha". You don't "field" a
seed AI, you tend its quiet growth.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Tue Jun 18 2013 - 04:00:25 MDT