From: Eliezer S. Yudkowsky (email@example.com)
Date: Sun Jan 28 2001 - 13:41:30 MST
Ben Goertzel wrote:
> > Okay... so how come I'm Friendly?
> Because you, Eliezer, are not an AI !!!
Right. I'm not. I was generated by evolution and I can't alter my own
source and I'm *still* Friendly.
> You, like all of us, have the whole human evolutionary animal heritage, even
> if you ~are~ a celibate teetotaler weirdo computer nerd ;> ...
> This heritage brings us a lot of problems, but it also brings warmth, love &
> compassion.
Right. So I reified the warmth, love & compassion into a philosophy of
symmetrical moral valuation of sentient entities, used the philosophy to
take cognitive potshots at all the emotions that didn't look
sentient-symmetrical, and it worked. How is this different from a
Friendly AI maintaining Friendship in the face of any
sentient-asymmetrical emergent forces that may pop up?
> I do think future AI's will probably be nice to humans. You've convinced me
> they're unlikely to be mean, and
> unlikely to be really nuts. Very good arguments in this regard, you have.
Yes, but that's just the *baseline*. Creating Friendship is a different
matter.
> But I still think that AI's will probably evolve to a point where
> they're kinda bored with us. They will then lack the warmth & caring about
> humans that we, as humans, have.
> Which is OK, isn't it?
As "destinies of the solar system" go, it's better than nothing - a
community of competing transhuman unFriendly AIs doing most of the
interesting stuff humanity would have done - but I still find it
undesirable if sentient rights get violated along the way. I prefer to
think of humans-turned-transhuman and their boon-drinking-companion AIs
doing all the interesting things Out There within the embrace/OS/API of a
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:35 MDT