Re: friendly ai

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Jan 28 2001 - 12:13:10 MST


Ben Goertzel wrote:
>
> Pointing to Buddhism was just a way of saying that friendliness, in humans,
> does not inevitably seem to have learning & knowledge creation as subgoals

The globelike shape of the Earth, in humans, is not an inevitable
conclusion from satellite photos. What humans happen to conclude, or fail
to conclude, isn't the standard here; that's why my original post
specified that it was an inevitable conclusion for *transhumans* only.

Do you seriously think that a Friendly AI which totally lacked the
behaviors and cognitive complexity associated with learning would be more
effective in making Friendliness real?

Anything whose presence predictably makes the supergoal more achievable
is thereby a valid subgoal of it. Ergo, the behaviors associated with
learning are valid subgoals of Friendliness.
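
If it helps to see the shape of the argument, here is a toy sketch in
Python. Everything in it (the baseline, the effect numbers, the predicate
name) is invented purely for illustration; it is not a fragment of any
actual goal-system design. It just applies the rule stated above: a
behavior qualifies as a subgoal exactly when adopting it is predicted to
make the supergoal more achievable.

    # Toy sketch of instrumental subgoal derivation -- all names and
    # numbers here are invented for illustration only.

    def expected_achievement(baseline, behaviors):
        """Stand-in for a goal system's prediction of how well the
        supergoal gets realized, given a set of adopted behaviors."""
        return baseline + sum(effect for _, effect in behaviors)

    def is_valid_subgoal(behavior, baseline, adopted):
        # A behavior is a valid subgoal of the supergoal exactly when
        # adopting it is predicted to make the supergoal more achievable.
        return (expected_achievement(baseline, adopted + [behavior])
                > expected_achievement(baseline, adopted))

    FRIENDLINESS_BASELINE = 0.1               # hypothetical number
    learning = ("learning", 0.5)              # raises expected achievement
    rock_polishing = ("rock polishing", 0.0)  # does nothing for it

    print(is_valid_subgoal(learning, FRIENDLINESS_BASELINE, []))        # True
    print(is_valid_subgoal(rock_polishing, FRIENDLINESS_BASELINE, []))  # False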

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


