Re: Adaptation brings unFriendliness

From: Philip Goetz (philgoetz@gmail.com)
Date: Tue Dec 05 2006 - 13:59:35 MST


This message didn't make it to SL4 the first time.

On 11/27/06, Christopher Healey <CHealey@unicom-inc.com> wrote:
> Phil,
>
> Are you saying that you don't think that a superintelligence working
> toward *some* goal would derive self-preservation as an important
> instrumental goal? It's hard to move decisively toward any goal if you
> don't exist.

That's a good point.

I don't have a problem with a superintelligence having
self-preservation as a goal. It's Asimov's Third Law, for example, and
yet the first two laws keep the robots firmly under the thumbs of the
humans. The problem is with a superintelligence evolving a drive to
expand the proportion of resources it controls, especially if, on
game-theoretic grounds, it acts pre-emptively, seizing resources only
in order to deny them to other intelligences.

Now that I think about it, it also makes sense that increasing the
resources you control is a good strategy for accomplishing any goal.
So it might be that even a single superintelligence will become
"greedy". It certainly seems that a pair of intelligences would each
become greedy, from game-theoretic considerations, unless each had
reason to believe that the other would not act greedily and
pre-emptively seize resources.
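
To make that concrete, here's a toy sketch in Python. The payoffs are
numbers I made up just to show the structure, not part of any real
model: if "seize pre-emptively" vs. "share" is a one-shot prisoner's
dilemma, seizing is the best response no matter what the other agent
does, even though mutual sharing pays both agents more.

# Toy illustration with made-up payoffs: each of two agents chooses to
# SHARE resources or pre-emptively SEIZE them.  Seizing dominates even
# though mutual sharing pays both agents more, which is the
# prisoner's-dilemma structure behind the "greedy" worry.

SHARE, SEIZE = "share", "seize"

# PAYOFF[(my_move, other_move)] = (my_payoff, other_payoff)
PAYOFF = {
    (SHARE, SHARE): (3, 3),   # both cooperate: decent outcome for both
    (SHARE, SEIZE): (0, 5),   # I share, the other grabs everything
    (SEIZE, SHARE): (5, 0),   # I grab everything
    (SEIZE, SEIZE): (1, 1),   # costly race to lock up resources
}

def best_response(other_move):
    """Return my payoff-maximizing move given the other agent's move."""
    return max((SHARE, SEIZE),
               key=lambda mine: PAYOFF[(mine, other_move)][0])

for other in (SHARE, SEIZE):
    print("If the other agent plays", other,
          "my best response is", best_response(other))
# Seizing wins either way, so absent some reason to trust the other
# agent, both end up in the poor (1, 1) outcome.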

This goes back to my call at the AGIRI conference for work not just
on trying to make a single AI friendly, but on trying to figure out
what starting conditions, in an AI ecosystem, would encourage
cooperation rather than preemptive ("greedy") competition.
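
And a first, very rough stab at what I mean by "starting conditions",
reusing the made-up payoffs and definitions from the sketch above:
repeat the game, and compare unconditional seizers with reciprocators
that seize only in retaliation. Whether cooperation pays depends on
which strategies the ecosystem starts with.

def play(strategy_a, strategy_b, rounds=50):
    """Total payoffs for two strategies in the repeated game."""
    score_a = score_b = 0
    hist_a, hist_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def always_seize(opponent_history):
    """Unconditionally grab resources."""
    return SEIZE

def reciprocator(opponent_history):
    """Seize only if the other agent seized on the previous round."""
    if opponent_history and opponent_history[-1] == SEIZE:
        return SEIZE
    return SHARE

print("seizer vs seizer:      ", play(always_seize, always_seize))
print("reciprocator vs seizer:", play(reciprocator, always_seize))
print("reciprocator pair:     ", play(reciprocator, reciprocator))
# Two reciprocators do far better together than two seizers do, so an
# ecosystem seeded with enough conditional cooperators can make
# cooperation, rather than pre-emption, the winning strategy.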


