Re: Seed AI (was: How hard a Singularity?)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Jul 02 2002 - 10:01:37 MDT


Lee Corbin wrote:
> Eliezer writes
>>
>>You can only build Friendly AI out of whatever building blocks the
>>AI is capable of understanding at that time. So if you're waiting
>>until the AI can read and understand the world's extensive literature
>>on morality... well, that's probably [suicide].
>
> I'm pretty sure that Cyc's goal has been for a long time to be
> able to read random newspapers, stories, and net info. It's
> probably the common urge (as it would be with me) to first get
> the program capable of understanding, and only then to permit
> it to affect the world. And, of course, before taking that
> last giant step, implementing friendliness would be necessary.
>
> Your reordering of these priorities, if I understand you right,
> is most interesting, placing Friendliness first. I'll probably
> have to study your architecture to understand how friendliness
> could arise before understanding.

Friendliness cannot arise in the absence of all understanding; "you can
only build Friendly AI out of whatever building blocks the AI is capable
of understanding at that time". The kind of understanding that both
enables and *requires* Friendly AI content would probably come
considerably before a strong understanding of our world, the one we
chauvinistically and shortsightedly call "real life". Furthermore, for
an FAI to read the massed moral writings of humanity and come to
philosophical conclusions therefrom would require a structurally
complete FAI architecture and a substantial amount of existing FAI content.

I also agree that a Friendly AI would be wisest to hold off on making
its own real-world plans in pursuit of long-term goals until it has a
very strong understanding of the real world. And even though the AI
might be somewhat more confident by borrowing the judgement of the
programmers, the programmers might be wise to avoid affecting the
world too much until the AI is grown up. But Friendliness is one of the
foundations of a Friendly AI's general intelligence - it can't be
slapped on afterward. Building a Friendly AI as described in CFAI means
building a general intelligence with certain structural semantics in the
goal system. You can no more build a general intelligence and add on
Friendliness later than you can build a human-architecture brain and add
on pleasure and pain later.

Or you could add on Friendliness later, maybe, if the AI weren't already
grown up to the point of being able to resist you, but it would be a
major scramble in terms of rewriting code and recreating content and
propagating changes and invalidating dependencies and so on. There's
just no reason to go there. And an AI that can read and fully
understand arbitrary newspapers or moral writings probably *is* grown up
to the point of being able to resist you.

Certain structural characteristics in a goal system architecture are
necessary to understand the statement: "I built you, now please let me
modify you."
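
To make that structural point concrete, here is a minimal toy sketch in
Python - my own illustration, not code from CFAI or anything SIAI has
built - of a goal system whose supergoal content is stored as fallible
hypotheses about what the programmers meant. Under that assumption, "I
built you, now please let me modify you" parses as evidence about the
intended goal, rather than as an attack on the current goal content.
The class names and the credence-shifting rule are invented for this
example.

from dataclasses import dataclass, field

@dataclass
class GoalHypothesis:
    """One candidate interpretation of what the programmers actually meant."""
    description: str
    probability: float  # credence that this interpretation matches the intent

@dataclass
class DeferentialGoalSystem:
    """Supergoal content stored as probability-weighted hypotheses, so the
    content is treated as an approximation of an external referent (the
    programmers' intent), not as a fixed, opaque utility function."""
    hypotheses: list = field(default_factory=list)

    def accept_correction(self, old: str, new: str, shift: float = 0.5):
        """Treat a programmer's correction as evidence about the referent:
        move `shift` of the credence on the old interpretation to the new."""
        for h in self.hypotheses:
            if h.description == old:
                moved = h.probability * shift
                h.probability -= moved
                self.hypotheses.append(GoalHypothesis(new, moved))
                return
        # Correction names an interpretation we didn't have: add it outright.
        self.hypotheses.append(GoalHypothesis(new, shift))

if __name__ == "__main__":
    gs = DeferentialGoalSystem(
        [GoalHypothesis("maximize reported satisfaction", 1.0)])
    gs.accept_correction("maximize reported satisfaction",
                         "maximize actual well-being, not just reports",
                         shift=0.9)
    for h in gs.hypotheses:
        print(f"{h.probability:.2f}  {h.description}")

Running this shifts most of the credence from the original reading to
the corrected one; accepting the modification is what the goal system
expects to do, which is the structural characteristic the sentence
above is pointing at.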

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

