From: Eugene Leitl (Eugene.Leitl@lrz.uni-muenchen.de)
Date: Wed Feb 27 2002 - 08:06:31 MST
On Wed, 27 Feb 2002, Eliezer S. Yudkowsky wrote:
> Uh, not true. A seed AI is fundamentally built around general
> intelligence, with self-improvement an application of that intelligence.
I'm of course disagreeing that general intelligence is codable by a team
of human programmers, due to the usual complexity barrier, but for the
sake of argument let's assume it's feasible. It seems that introspection
and self-manipulation require a very specific knowledge base -- compare,
for instance, a human hacking the human brain's morphology, or fiddling
with the genome for the ion channels expressed in the CNS. This is
nothing like navigating during a pub crawl, or building a robot. Very
different domains.
> It may also use various functions and applications of high-level
> intelligence as low level glue, which is an application closed to
> humans, but that doesn't necessarily imply robust modification of the
> low-level code base; it need only imply robust modification of any of
> the cognitive structures that would ordinarily be modified by a
> brainware system.
So you're modifying internal code, and dumping that into a low-level
compilable result? You could cloak brittleness that way (assuming you
design for it), but you'd lose efficiency, and hence utilize the
hardware (which is not much to start with, even a decade from now) very
badly, losing orders of magnitude of performance.
> The milestones for general intelligence and for self-modification are
> independent tracks - though, of course, not at all independent in any
> actual sense - and my current take is that the first few GI milestones
> are likely to be achieved before the first code-understanding milestone.
Interesting. GI looks like a tall order; good luck.
> It's possible, though, that I may have misunderstood your meaning, since
> I don't know what you meant by "first fielded alpha". You don't "field"
> a seed AI, you tend its quiet growth.
I understood that it is the function of the seed AI to achieve a
criticality regime, after which humans could not/should not be able to
intervene even if they wanted to.
This archive was generated by hypermail 2.1.5 : Mon May 20 2013 - 04:00:28 MDT