Re: Seed AI (was: How hard a Singularity?)

From: Samantha Atkins (samantha@objectent.com)
Date: Tue Jul 02 2002 - 22:45:12 MDT


Eliezer S. Yudkowsky wrote:

> You can only build Friendly AI out of whatever building blocks the AI is
> capable of understanding at that time. So if you're waiting until the
> AI can read and understand the world's extensive literature on
> morality... well, that's probably not such a good idea. (In the
> foregoing sentence, for "not such a good idea" read "suicide".) Once
> the AI can read the world's extensive literature on morality and
> actually understand it, you can use it as building blocks in Friendly AI
> or even try to shift the definition to rest on it, but you have to use
> more AI-understandable concepts to build the Friendliness architecture
> that the AI uses to get to that point.
>

Eliezer,

That last sentence didn't parse. Want to have another go at it?
    It is not clear to me that the architecture required to
understand world literature and philosophy regarding morality is
necessarily an architecture already based on, steeped in,
and supporting Friendliness. My apologies if my attempted
parsing is too far from what you intended.

- samantha



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:40 MDT