Re: Seed AI (was: How hard a Singularity?)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Jul 01 2002 - 21:58:11 MDT


Lee Corbin wrote:
>>
>> That said, a mind in general would not by default be predisposed to
>> empathize. With training, though, I think it would be possible to
>> teach empathy content and then empathy skills. Short of that,
>> empathy could always be programmed in (and in fact empathy may
>> arise in a self-improving AI out of its ability to understand
>> itself in order to improve itself, but that may depend on the
>> specifics of the architecture).
>
> Now *that* I understand! So wouldn't it make sense to *first* have
> an AI *understand* what many people have believed to be "moral" and
> "ethical" behavior---by reading the world's extensive literature---
> before attempting to engage in moral behavior, or even to try to
> empathize?

You can only build Friendly AI out of whatever building blocks the AI is
capable of understanding at that time. So if you're waiting until the
AI can read and understand the world's extensive literature on
morality... well, that's probably not such a good idea. (In the
foregoing sentence, for "not such a good idea" read "suicide".) Once
the AI can read the world's extensive literature on morality and
actually understand it, you can use that understanding as building
blocks in Friendly AI, or even try to shift the definition of
Friendliness to rest on it. But you still have to build the
Friendliness architecture that gets the AI to that point out of
simpler, AI-understandable concepts.
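As a rough illustration of that layering (a hypothetical sketch only,
not a design from this post, and every name in it is invented): a goal
definition can start out grounded in crude predicates the young AI
already understands, and later be rebound to richer concepts once the
AI can represent them.

# Hypothetical sketch in Python: a goal assembled from whatever
# building blocks the AI currently understands, with the option of
# rebinding the definition to richer concepts later on.

from typing import Callable, Dict

Predicate = Callable[[dict], bool]

class GoalConcept:
    """A goal definition built from currently-understandable predicates."""

    def __init__(self, name: str, building_blocks: Dict[str, Predicate]):
        self.name = name
        self.building_blocks = building_blocks

    def evaluate(self, situation: dict) -> bool:
        # The goal is satisfied only if every current building block agrees.
        return all(block(situation) for block in self.building_blocks.values())

    def rebind(self, richer_blocks: Dict[str, Predicate]) -> None:
        # Once the AI understands richer concepts (say, ones distilled from
        # the moral literature), shift the definition to rest on those
        # instead of the crude early predicates.
        self.building_blocks = richer_blocks

# Early stage: crude predicates the young AI can actually understand.
friendliness = GoalConcept(
    "Friendliness",
    {"programmer_approval": lambda s: s.get("programmers_approve", False)},
)

# Later stage: rebind to richer concepts the mature AI has learned.
friendliness.rebind(
    {"reflective_endorsement":
        lambda s: s.get("humans_would_endorse_on_reflection", False)}
)

The point of the sketch is only the rebind step: the early building
blocks have to be understandable at the time, even though the eventual
definition is meant to rest on something much richer.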

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

