Re: Seed AI milestones (was: Microsoft aflare)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Feb 27 2002 - 08:27:30 MST


Eugene Leitl wrote:
>
> On Wed, 27 Feb 2002, Eliezer S. Yudkowsky wrote:
>
> > Uh, not true. A seed AI is fundamentally built around general
> > intelligence, with self-improvement an application of that intelligence.
>
> I'm of course disagreeing that general intelligence is codable by a team
> of human programmers due to the usual complexity barrier, but for the sake
> of argument: let's assume it's feasible. It seems that introspection and
> self-manipulation require a very specific knowledge base -- compare, for
> instance, a human hacking the human brain morphology box, or fiddling with
> the genome for the ion channels expressed in the CNS. This is nothing like
> navigating during a pub crawl, or building a robot. Very different
> knowledge base.

Yes, acquiring the domain competency of programming requires acquiring
domain expertise in programming.

> > It may also use various functions and applications of high-level
> > intelligence as low level glue, which is an application closed to
> > humans, but that doesn't necessarily imply robust modification of the
> > low-level code base; it need only imply robust modification of any of
> > the cognitive structures that would ordinarily be modified by a
> > brainware system.
>
> So you're modifying internal code, and dumping that into a low-level
> compilable result? You could cloak brittleness that way (assuming you
> design for it), but you'd lose efficiency, and hence utilize the hardware
> (which is not much to start with, even a decade from now) very badly,
> losing orders of magnitude of performance that way.

Hm, I'm not sure how you interpreted my statement above. I meant that
consciously modifying the concept of redness is a different internal
application from consciously modifying the source code that implements the
visual cortex.
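
To make the distinction concrete, here is a minimal sketch -- every name
in it (visual_cortex.py, GAIN, the concepts table) is a hypothetical
illustration, not actual seed AI code. The first operation stays inside a
representation the system was already designed to change at runtime; the
second rewrites and reloads the machinery doing the representing.

    # Minimal sketch; all names are hypothetical illustrations.
    # Level 1: modifying a *concept* -- an ordinary runtime operation
    # on a data structure the system already treats as mutable.
    concepts = {"redness": {"hue_range": (0.95, 1.05), "salience": 0.7}}
    concepts["redness"]["salience"] = 0.9  # routine cognitive-level change

    # Level 2: modifying the *code base* -- rewriting and reloading the
    # source implementing a faculty. Assumes a file visual_cortex.py
    # exists and contains the line "GAIN = 1.0".
    import importlib, pathlib

    src = pathlib.Path("visual_cortex.py")
    src.write_text(src.read_text().replace("GAIN = 1.0", "GAIN = 1.2"))

    import visual_cortex
    importlib.reload(visual_cortex)  # the implementing substrate changed

The point of the sketch: the two operations demand very different knowledge
bases, which is why competence at one implies nothing about the other.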

> > It's possible, though, that I may have misunderstood your meaning, since
> > I don't know what you meant by "first fielded alpha". You don't "field"
> > a seed AI, you tend its quiet growth.
>
> I understood that it is the function of the seed AI to achieve a
> criticality regime, after which humans could not/should not be able to
> intervene even correctively.

Criticality is the very, very *last* milestone, for obvious reasons. It
comes long after the AI first becomes able to modify a few pieces of code.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


