Game theoretic concerns and the singularity (was RE: Are we Gods yet?)

From: lurskalot (lurksalot@netgods.net)
Date: Wed Jul 31 2002 - 14:45:21 MDT


On Wed, 2002-07-31 at 11:47, Ben Goertzel wrote:

> I mean, let's face it, much of human life is still devoted to food, sex,
> violence, status and other Stone Age ish things. Whereas the basic
> interests and motivations of transhumans may be entirely different...

Yes, but no matter how transhuman an entity is, it will still have to
acquire energy to run itself (food) and it will likely reproduce (sex).
I'd be more amenable to an argument that violence or status could be
eliminated from post-singularity civilization, although it seems likely
to me that both will survive it just fine. Calling them "Stone Age" may
make them easier to dismiss, but it doesn't alter the fact that what you
are referring to are fundamental processes that *any* intelligence will
have to deal with.

This brings up a point that I've had much difficulty with while reading
the writings of various members of this list. What justification is
there for the belief that game-theoretic concerns will not apply
post-singularity? (I'm sure I've read Eliezer say exactly that, but I
can't find the reference anymore; perhaps it was in CaTAI?) To be blunt,
I don't buy the notion that basic motivations are going to change as a
function of intelligence, because I don't buy the idea that sufficient
intelligence suspends game theory. It may be possible to devise some
pathological Singularity trajectories that result in a suspension of
game-theoretic interactions, but it isn't likely.

For game-theoretic concerns to apply to a scenario, three criteria must
be met:

1. There must be a finite supply of resources.
2. There must be competing agents.
3. Demand for resources among the agents must exceed supply.

Disrupting any one of these would be sufficient to render analysis
using present tools and concepts impossible.
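
To make those criteria concrete, here is a toy sketch (my own
construction, not anything from Eliezer or Ben, and the numbers are
arbitrary): a finite energy pool, two agents whose combined claims
exceed it, and a brute-force check for pure-strategy equilibria. The
only point is that a strategic question exists once all three criteria
hold, and collapses into plain optimization when criterion 2 is removed.

# Toy model of the three criteria above. SUPPLY, CLAIMS and
# CONFLICT_COST are made-up numbers chosen only for illustration.
from itertools import product

SUPPLY = 10          # finite resource pool (criterion 1)
CLAIMS = [4, 7]      # possible claims; 7 + 7 > 10, so demand can exceed supply (criterion 3)
CONFLICT_COST = 5    # cost each agent pays when the pool is over-claimed

def payoff(my_claim, their_claim):
    """Energy an agent nets, given both claims against the finite supply."""
    if my_claim + their_claim <= SUPPLY:
        return my_claim                      # no scarcity, no conflict
    # pool is over-claimed: split proportionally, minus a conflict cost
    return my_claim * SUPPLY / (my_claim + their_claim) - CONFLICT_COST

def pure_nash_equilibria():
    """Brute-force pure-strategy equilibria of the two-agent claim game
    (criterion 2: at least two agents with interdependent payoffs)."""
    equilibria = []
    for a, b in product(CLAIMS, repeat=2):
        a_best = all(payoff(a, b) >= payoff(alt, b) for alt in CLAIMS)
        b_best = all(payoff(b, a) >= payoff(alt, a) for alt in CLAIMS)
        if a_best and b_best:
            equilibria.append((a, b))
    return equilibria

if __name__ == "__main__":
    # With two agents and scarcity, the strategic question is non-trivial.
    print("Two-agent equilibria:", pure_nash_equilibria())
    # With a single agent (criterion 2 removed), there is no strategy at
    # all: the lone agent just takes its maximum claim, capped by supply.
    print("Single agent simply takes:", min(max(CLAIMS), SUPPLY))

Run it and the two-agent case has genuine strategic structure (two pure
equilibria, one restrained and one wasteful), while the single-agent
case has nothing to analyze at all, which is exactly why the second
criterion carries all the weight in what follows.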

The first point is not really debatable. Barring radical new physics,
there will always be a finite supply of energy available for
computation. If new physics emerges that allows an infinite amount of
energy to be extracted from a finite amount of matter, well, then none
of our speculations about anything mean anything anyway.

The third point, too, is easy to settle. Whether there are competing
agents or not, it is reasonable to believe that a Singularity will
consume the maximum amount of energy feasibly available, and that it
could consume still more were it to become available.

This leaves the second point, which appears to provide the only
mechanism capable of disrupting our ability to analyze (extrapolate
to?) post-singularity civilization. If there is only one intelligence,
then there can't very well be competition between agents. But without
competition, game theory falls apart. The question, then, is whether a
single superintelligence is more likely than multiple ones.

It seems overwhelmingly likely to me that there will be multiple
superintelligences. For there to be only one SI, the takeoff would have
to be so fast that it strongly implies there are no additional "hard"
problems in intelligence once we get just past the human level. It
requires that there be no problems as hard as human-level AI itself, no
places where the blossoming process gets stuck. Because if it does get
stuck, anywhere, there will soon be other SIs (it is easier to solve a
problem once you know it has been solved somewhere else, even if you
don't know exactly how it was solved). As soon as there are other SIs,
game theory comes back and, with it, our ability to say useful things
about possible civilizational structures in the post-singularity world.

daniel

-- 
Clarke's First Law:
When a distinguished but elderly scientist states that something is
possible he is almost certainly right. When he states that something
is impossible, he is very probably wrong.
