Intelligence is exploitative (RE: Zen singularity)

From: Joseph W. Foley (fole0091@umn.edu)
Date: Sun Feb 22 2004 - 22:16:23 MST


Greetings.

entropy@farviolet.com said:
> However I'm talking about the distant (or even not so distant) future.
> Already the human race has reached a point where we have decided to
> limit our expansion.

...

> It seems quite possible that a 'higher' being might be even more so
> this way.

mike99 said:
> Maybe. Or maybe not. I haven't heard any good reasons why this would
> be so.

...

> Obviously, there are many questions on this topic that we are simply
> not capable of answering at this time. It would seem merely prudent,
> then, to minimize the number of assumptions we make about what other,
> smarter beings might want to do, or be capable of doing. All we can be
> sure of is what the most intelligent species on our planet has done and
> is doing. Extrapolating the behavior of this species into the future
> would seem to be less problematic than assuming that a still higher
> intelligence would automatically behave in a radically different
> fashion.

Indeed, we must be careful not to assume that a singularly intelligent
being would be singularly ethical (by standards that vary among members
of our species anyway). Yes, its ethos or goal-system could probably be
influenced somewhat by its creators, for Friendliness or just about
anything else. However, as Darwin realized about biology, all we can
infer from the existence of creatures is that those creatures must be
really good at existing, because most conceivable alternatives lost the
struggle for survival (and most of those before the struggle even
started).

So, what can we infer about a hypothetical Higher being, if we assume
its existence? Well, it may have always been a seamlessly integrated
part of the universe, or it may have arisen from the
biological/cybernetic evolution of Lesser beings. If the former, we
can't infer very much about the being at all; if the latter, then we can
infer that it was somehow better able to rise to power than any of its
contemporaries. The ethos that leads most directly to that status seems
to me to be the one many of us would consider the *least* agreeable of
all: it would have to be adept at assimilating or destroying all
external competition. No goal-based decision-maker is without some kind
of competition for existence, though that competition does not always
come from other goal-based decision-makers.

entropy@farviolet.com said:
> Just as we feel we don't have the right to remake all of nature
> according to our rules,...

But the opposite feeling is exactly what we expect out of any
intentional agent that succeeds at existing. It's impossible for a
rule-making entity to exist if it doesn't do so by remaking part of
nature (remaking all of nature is an issue of feasibility, not rights).
Of course, it seems obvious that a successful goal-based existence
machine will remake nature into tools less arbitrary than a Zen garden.
This is really the point of worrying about Friendly AI: we assume that a
super-intelligence that comes into being on its own, without our
tampering, will want to exploit us; the best exister we can imagine is
decidedly unFriendly.

So, it might limit its expansion (the assimilation of the external)
temporarily, but only if that investment could actually boost long-run
expansion. For example, humans might decide not to destroy the
environment, but only because in the long run it's actually bad for us,
and not because it's bad for the environment itself.

As mike99 suggested, we can only extrapolate from what we've seen on our
planet. As species become more intelligent, they modify their
environments more. After all, that's what intelligence is: the ability
to modify the environment toward one's goals (see postscript).

Joe Foley
"Ignorance is bliss, but knowledge is power."

P.S.

Compare:

> My current working def'n of "intelligence" is "able to achieve complex
> goals in complex environments" but it may be there are other useful
> characterizations of intelligence that don't involve goals, or useful
> definitions of mind that don't involve "intelligence" ...

- Ben Goertzel, Sun, 22 Feb 2004 23:17:51 -0500

My informal definition adds that modifying those complex environments is
the means by which intelligences achieve goals in them. I am
not aware of a form of intelligence that doesn't involve goals, and I
would like very much to see an example described.
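
To make that informal definition concrete, here is a minimal sketch (a
toy illustration of my own, not Ben's working model or anyone else's): a
goal-directed agent in a trivially simple world that can reach its goal
state only by modifying its environment. All the names below are
hypothetical.

    # Toy goal-based decision-maker (Python). The "world" is a list of
    # cells; the agent's goal is a target configuration it can reach
    # only by changing cells, i.e. by remaking part of its environment.

    def pursue_goal(world, goal):
        world = list(world)
        steps = 0
        while world != goal:
            # pick the first cell that differs from the goal and modify it
            i = next(k for k, (w, g) in enumerate(zip(world, goal)) if w != g)
            world[i] = goal[i]
            steps += 1
        return world, steps

    start = [0, 0, 0, 0]
    goal = [1, 0, 1, 1]
    print(pursue_goal(start, goal))   # ([1, 0, 1, 1], 3)

The point of the toy is only that "achieving the goal" and "modifying
the environment" are the same operation here: take away the agent's
ability to change cells and no amount of cleverness gets it to the goal.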


