RE: supergoal stability

From: Ben Goertzel (ben@goertzel.org)
Date: Fri May 03 2002 - 20:21:14 MDT


I really don't think so, Eliezer.

FIRST:
I think that each of us has a LOT MORE DATA pertinent to the problem of how
to please ourselves than we do pertinent to the problem of how to please
others.

You could argue that a community of AIs could gather data about how to
please each other by constantly monitoring each other's minds. If this were
a major feature of their activities, though, it's not clear in what sense
one would really have a "community of individual minds" as opposed to a
"group mind with semi-autonomous lobes."

Similarly, you could argue that an advanced AI could study our brain
dynamics and structures to help figure out how to please us better. Maybe
so.

SECOND:
Even if the data regarding how to please others is accessible to an AI mind
by some futuristic method, there is a lot MORE data to be analyzed regarding
the "please others" problem than regarding the "please self" problem.

Sometimes more data makes a problem easier, but in this case, I think the
"more data" is bound to be kind of chaotic and conflicting, posing a HARDER
data analysis problem....

THIRD:
Pleasing others may be much harder in the future than it is now, because
there may be a much larger diversity of minds out there (what with AIs of
various sorts, genetically modified humans, cyborgs, etc.). Pleasing
everybody all the time will be even less feasible than it is now, it would
seem.

Being Friendly to 10^20 different minds with totally different needs and
constructions, in an environment where they are sometimes in competition,
is not gonna be an easy thing at all.

Yeah, you can argue that in the future all resource limitations will be
eliminated, which obviously would make the problem easier. But this may be
overoptimistic!!

-- ben g

> Peter Voss wrote:
> >
> > Ben, you make a very good point. Figuring out what is good for yourself
> > seems much easier than trying to balance the needs/ desires of everyone
> > else.
>
> Isn't this a special case of our being humans rather than ants or AIs?
>
> -- -- -- -- --
> Eliezer S. Yudkowsky http://intelligence.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
>


