RE: Game theoretic concerns and the singularity (was RE: Are we Gods yet?)

From: Ben Goertzel (ben@goertzel.org)
Date: Sun Jun 30 2002 - 15:10:17 MDT


hi,

> Yes, but no matter how transhuman an entity is it will still have to
> acquire energy to run itself (food)

Its energy requirements may be sufficiently low that, for it, obtaining energy
is much like obtaining air is for us.

> and it will likely reproduce (sex).

Not necessarily; maybe it will just expand. And sexuality in humans is
about far more than just reproduction.

> I'd be more amenable to an argument that violence or status could be
> eliminated from post-singularity civilization, although it seems likely
> to me that both of those should survive it just fine.

Status-seeking behavior in humans derives directly from "sexual
selection" -- trying to impress potential mates...

If transhumans seek status, it will be in a very different sense, I believe.

> Calling it "Stone
> Age" may make it easier to dismiss, but it doesn't alter the fact that
> what you are referring to are fundamental processes that *any*
> intelligence will be required to deal with.

I think you're abstracting the 'fundamental processes' I've mentioned to a
very high degree in order to arrive at this conclusion.

> For game theoretic concerns to apply to a scenario there are three
> criteria that must be met. There must be a finite supply of
> resources. There must be competing agents. Demand for resources among
> agents must exceed supply. Disrupting any one of these will be
> sufficient to render analysis using present tools and concepts
> impossible.
>
> The first point is non-debatable at this point. Barring radical new
> physics, there will always be a finite supply of energy available for
> computation. If new physics emerge that allow an infinite amount of
> energy to be extracted from a finite amount of matter, well, none of our
> speculations about anything mean anything anyway.

I disagree completely. There is a finite supply of air, but it doesn't
bother humans; we don't compete over air.

A sufficiently large finite supply of X is essentially the same as an
infinite supply of X, for practical purposes.
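
To make that concrete, here is a toy sketch (the payoff numbers, the cost of
competing, and the agent count are all illustrative assumptions, not anything
from this thread): once supply covers every agent's demand, an agent's payoff
no longer depends on what the other agents do, and the strategic "game" in the
game-theoretic sense disappears.

# Toy resource game (all numbers are illustrative assumptions):
# each agent wants 1 unit; "grab" costs 0.2 but gets priority when
# the resource is scarce; "share" splits whatever the grabbers leave.

def payoff(my_move, other_moves, supply):
    agents = 1 + len(other_moves)
    if supply >= agents:  # abundance -- the "air" case
        # Everyone gets a full unit no matter what anyone does,
        # so grabbing only subtracts its own cost.
        return 1.0 - (0.2 if my_move == "grab" else 0.0)
    grabbers = (my_move == "grab") + sum(m == "grab" for m in other_moves)
    if my_move == "grab":
        return min(1.0, supply / grabbers) - 0.2
    leftover = max(0.0, supply - grabbers)  # grabbers take up to 1 unit each
    return min(1.0, leftover / (agents - grabbers))

# Abundant supply: my payoff ignores the others' moves entirely.
print(payoff("share", ["grab", "grab"], supply=10.0))  # 1.0
print(payoff("grab", ["grab", "grab"], supply=10.0))   # 0.8
# Scarce supply: now my payoff depends on what everyone else does.
print(payoff("share", ["grab", "grab"], supply=2.0))   # 0.0
print(payoff("grab", ["grab", "grab"], supply=2.0))    # ~0.47

In the abundant case every strategy profile gives each agent its full demand,
so there is nothing left for game theory to analyze.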

> The third point, also, can be easily dismissed. Whether there are
> competing agents or not it is reasonable to believe that a Singularity
> will consume the maximum amount of energy feasibly possible and that it
> would be able to consume beyond that were more to become available.

I guess it's reasonable to believe that, but it's just as reasonable not
to...

Maybe a superintelligent AI will find ways to achieve what it wants with
relatively modest energy. Or as you say, maybe it will find new ways of
generating energy.

> This leaves the second point, which appears to provide the only
> mechanism capable of disrupting our ability to perform analysis of
> (extrapolation to?) post-singularity civilization. If there is only one
> intelligence, then there can't very well be competition between agents.
> But without competition, game theory falls apart. The question, then,
> is whether it is more likely that there will be a single super
> intelligence than multiple.

We have no idea. I'm guessing it will be somewhere in between what we think
of as "single" and what we think of as "multiple".

> It seems overwhelmingly likely to me that there will be multiple super
> intelligences. For there to be only one SI requires such a fast takeoff
> time that it strongly implies that there will be no additional "hard"
> problems in intelligence after we get just past human level. It
> requires that there be no problems as hard as human-level AI itself, no
> places where the blossoming process gets stuck. Because if it does get
> stuck, anywhere, there will soon be other SIs (it is easier to solve a
> problem once you know it has been solved somewhere else, even if you
> don't know exactly how it was solved). As soon as there are other SIs,
> game theory comes back and, with it, our ability to say useful things
> about possible civilizational structures in the post-singularity world.

Even if there are many superhuman AIs at first, they may end up merging
together.

I don't think game theory is very useful for analyzing complex human
situations, and I think it will also not be very useful for analyzing AIs
in the future, though perhaps for different reasons.

-- Ben G
