Re: flare and SIAI

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Jul 29 2001 - 22:38:33 MDT


Ben Goertzel wrote:
>
> From what I understand about Eli's approach, the problem of getting to phase
> two modification isn't broken down into parts in the same way as I propose
> to do.

Actually, it pretty much is. The point where we disagree is about how
hard it is to get to what you call "phase one". The reason why "phase
one" takes Flare or something like it is that initially the AI will be
extremely stupid and the alterations to the code will be more like
mutations and less like general reasoning. General reasoning would work
about equally well with Flare, C++, or assembly. The usefulness of
relatively "stupid" action is *extremely* dependent on programming
language. Thus, Flare is needed so that we can build a stupid AI. If we
had a smart AI, we could work in raw hexadecimal machine code... or
rather, the AI could. The problem is that any AI we build will initially
be stupid. The better suited the programming language is to modification
by AIs, the dumber the initial AI can be and *still work*.
Consider Flare as the first step on a very large ladder. It's not
impossible to climb the ladder if that step is missing, but you have to
take a very *large* first step in order to climb it. Frankly, I think
that the second step is high enough already, without yanking out the first
step as well.
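
To make the representation point concrete, here is a toy sketch in
Python - my illustration, not Flare itself; Python's ast module stands
in for the code-as-annotated-data property that a Flare-like language
would have. A blind character-level mutation almost always breaks the
program, while an equally blind mutation applied at the tree level is
syntactically valid by construction.

    import ast
    import random

    SOURCE = "def f(a, b):\n    return a + b * 2\n"

    def mutate_text(src, rng):
        # Blind mutation at the character level - the "raw hexadecimal
        # machine code" end of the scale.
        i = rng.randrange(len(src))
        return src[:i] + rng.choice("abcdefgh+*-(): ") + src[i + 1:]

    def mutate_tree(src, rng):
        # Blind mutation at the tree level: swap one arithmetic operator.
        # The mutant is still a well-formed tree, so it always parses;
        # only its *behavior* is up for grabs.
        tree = ast.parse(src)
        binops = [n for n in ast.walk(tree) if isinstance(n, ast.BinOp)]
        rng.choice(binops).op = rng.choice([ast.Add(), ast.Sub(), ast.Mult()])
        return ast.unparse(tree)  # ast.unparse needs Python 3.9+

    def compiles(src):
        try:
            compile(src, "<mutant>", "exec")
            return True
        except SyntaxError:
            return False

    rng = random.Random(0)
    trials = 1000
    text_ok = sum(compiles(mutate_text(SOURCE, rng)) for _ in range(trials))
    tree_ok = sum(compiles(mutate_tree(SOURCE, rng)) for _ in range(trials))
    print("text-level mutants still valid:", text_ok, "/", trials)
    print("tree-level mutants still valid:", tree_ok, "/", trials)

The dumber the mutation operator, the more work the representation has
to do - which is the whole argument for building Flare before building
the AI.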

What you describe as "phase two", and what I have nicknamed (borrowed from
Dan Clemmensen, actually) the "self-optimizing compiler" stage of seed AI,
might better be called the "self-understanding programmer", capable of
pouring its source from Flare into Python, Perl, C++, SMP assembly, or
FPGAs with equal ease. This is not necessarily 90% of the AI substance
needed to get to a hard takeoff - but it could easily turn out to be 90%
of the *time* required, especially if we should be thinking exponentially
instead of linearly. Past the self-optimizing stage, AI development
ceases to resemble programming and begins to resemble chatting with the
AI. I'm not sure there's much of a timewise gap between that and the
Singularity, even for a relatively primitive AI. It may take a huge
amount of self-invention to move on from there, but it will be proceeding
very quickly.
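
None of this captures the hard part - the understanding - but the shape
of the idea fits in a few lines. As a toy sketch (mine, assuming nothing
about Flare's actual internals): once a program lives as a
language-neutral tree, retargeting it to another language is just a
choice of emitter.

    from dataclasses import dataclass

    # A language-neutral program representation: a tiny expression
    # grammar plus a one-parameter integer function.

    @dataclass
    class Num:
        value: int

    @dataclass
    class Var:
        name: str

    @dataclass
    class Add:
        left: object
        right: object

    @dataclass
    class Func:
        name: str
        param: str
        body: object

    def emit_expr(node):
        # This toy expression syntax happens to coincide in Python and C.
        if isinstance(node, Num):
            return str(node.value)
        if isinstance(node, Var):
            return node.name
        return "(" + emit_expr(node.left) + " + " + emit_expr(node.right) + ")"

    def emit_python(fn):
        return "def %s(%s):\n    return %s\n" % (fn.name, fn.param, emit_expr(fn.body))

    def emit_c(fn):
        return "int %s(int %s) { return %s; }\n" % (fn.name, fn.param, emit_expr(fn.body))

    prog = Func("double_plus_one", "x", Add(Add(Var("x"), Var("x")), Num(1)))
    print(emit_python(prog))
    print(emit_c(prog))

The real difficulty is a tree rich enough to express what the program
*means* rather than just what it says - which is exactly what
"self-understanding" would have to mean.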

As for the notion that human-level AI will be developed, open-sourced, and
then slowly developed into superintelligence over years... Ben, you're in
"hard takeoff denial". (For the historical record, the term "hard takeoff
denial" was invented by Ben Goertzel.)

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


