Re: Complexity tells us to maybe not fear UFAI

From: Mikko Särelä (msarela@cc.hut.fi)
Date: Thu Aug 25 2005 - 04:33:32 MDT


On Thu, 25 Aug 2005, Chris Paget wrote:
> Phil Goetz wrote:
> > The fear of UFAIs is based on the idea that they'll be able to
> > outthink us, and to do so quickly.
> >
> > "More intelligent" thinking is gotten by adding another layer of
> > abstraction onto a representational system, which causes the
> > computational tractability of reasoning to increase in a manner that
> > is exponential in the number of things being reasoned about. Or, by
> > adding more knowledge, which has the same effect on tractability.
> >
> > By limiting the computational power available to an AI to be one or
> > two orders of magnitude less than that available to a human, we can
> > guarantee that it won't outthink us - or, if it does, it will do so
> > very, very slowly.
>
> You're assuming that the human brain is operating at more than 1% of its
> theoretical computational power here (and I'd be interested to see how
> you plan to calculate or prove that). It is at least possible that the
> AI will be able to self-optimise to such a degree that it could function
> effectively within any computational limits.

And you are assuming that many of the problems the AGI needs to solve have
computationally tractable solutions. This makes the P=NP? question highly
relevant to such a hypothetical situation.

We know there are problems that are exp-hard, but they are relatively
rare, and most of the interesting problems are not in that category. On
the other hand, a lot of interesting problems _are_ NP-hard to solve.
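
As a toy illustration (a sketch of my own, not anything established in
this thread): brute-forcing an NP-hard problem like subset sum means
searching a space exponential in the input size, so bounding an AGI's
hardware bounds it on such problems just as it bounds us:

    from itertools import combinations

    def subset_sum_brute_force(numbers, target):
        # Try every subset: 2^n candidates for n numbers, exponential time.
        for r in range(len(numbers) + 1):
            for combo in combinations(numbers, r):
                if sum(combo) == target:
                    return combo
        return None

    # Already at 30 numbers there are 2^30 (about 10^9) subsets to check.
    print(subset_sum_brute_force([3, 34, 4, 12, 5, 2], 9))  # -> (4, 5)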

If P=NP and the AGI is the first to discover this, then it will be able to
do things a lot faster than would otherwise be expected. Similarly, if the
truly interesting problems have good polynomial (or rather linear, or
sublinear) approximation algorithms, then taking away computational power
does not really help that much.
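
A minimal sketch of what I mean by a good polynomial approximation (my
example, not something from the thread): minimum vertex cover is NP-hard
to solve exactly, yet the classic matching-based greedy algorithm gets
within a factor of 2 of the optimum in time linear in the number of edges:

    def vertex_cover_2approx(edges):
        # Take both endpoints of any uncovered edge; the result is a
        # vertex cover at most twice the size of the optimal one.
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                cover.add(u)
                cover.add(v)
        return cover

    # Path graph 1-2-3-4: returns {1, 2, 3, 4}; the optimum {2, 3} has size 2.
    print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))

If the problems an AGI cares about admit approximations like this, then
starving it of hardware buys much less safety than the exact-solution
complexity would suggest.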

In this case, the interesting problems would likely be, first of all, the
AGI enhancing itself and its interpretation and decision-making algorithms
(note to readers: the terms used in this sentence are descriptive rather
than exact). The question, then, is how hard these problems are
computationally. The AGI will not have a philosopher's stone that lets it
calculate things faster than they can be calculated.

Thus, I would add to what previous posters have said: computational
complexity theory is of real importance to AGI development.

A final note: I am not speaking in favor of AGI-boxing, nor do I consider
it a good strategy.

Now, on to another topic I have been thinking about for a while. If I have
understood correctly, one of the reasons a spike, a singularity, is
predicted soon after the development of AGI is that it could devise better
hardware for itself in consecutive cycles, each time halving the time it
takes to develop the next generation.

I would like to argue against this proposition. (Note that I am not
arguing against other possible reasons for a singularity after the
appearance of the first AGI, just against this one.) The whole proposition
assumes that developing each new hardware generation is computationally
about as complex as developing the previous one was, or at least that the
complexity does not go up fast.

If we instead assume that the computational complexity of designing each
hardware generation increases exponentially, then the speed of takeoff
depends on the relative rates of growth. If the design cost grew at
exactly the same rate as the computing power gained, only the ordinary
exponential rate of growth would follow, not an accelerating spike.
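
To make the dependence on relative rates concrete, here is a toy model
(entirely my own, with invented parameters): suppose each hardware
generation doubles computing power, while designing generation n costs c^n
units of computation. The wall-clock time of a design cycle is then the
cost divided by the power available to do the designing:

    def design_cycle_times(c, generations=8):
        # Generation n costs c**n units of design computation; the machine
        # doing the design doubles in power each generation, so the
        # wall-clock time of cycle n is roughly (c/2)**n.
        power, times = 1.0, []
        for n in range(generations):
            times.append(c**n / power)
            power *= 2.0
        return times

    print(design_cycle_times(1.0))  # c < 2: cycle times halve -> a spike
    print(design_cycle_times(2.0))  # c = 2: constant cycles -> plain exponential
    print(design_cycle_times(3.0))  # c > 2: cycles lengthen -> slowdown

Only when the design cost grows more slowly than the power gained (c < 2
here) do the cycle times shrink geometrically and sum to a finite takeoff
time.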

Why should we believe that designing each next generation of hardware
might be computationally harder than the last?

For the past decades we have lived with approximately exponential growth:
the computational capacity of a chip has doubled roughly every two years.
At the same time, the computational effort we have put into producing each
new generation has also grown exponentially, in two ways. Firstly, we
spend more computer time designing the next-generation chip, and secondly,
we spend much, much more brainpower solving the problems each new chip
generation brings. Since several problem fields in computer hardware
design can be worked on in parallel, putting lots of humans on the
problems does not seem like a solution that loses much to overhead.
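
A back-of-envelope version of this, with invented numbers: if chip
capacity doubles every two years but the total design effort behind each
generation also doubles, then the effort needed per doubling of capacity
never shrinks, and an AGI inheriting that trend gets no free acceleration:

    # Toy numbers, purely illustrative: both chip capacity and the design
    # effort invested per generation double each cycle, so the ratio of
    # effort spent to capacity gained stays constant -- no compounding.
    capacity = effort = 1.0
    for gen in range(5):
        print(f"gen {gen}: capacity x{capacity:.0f}, design effort x{effort:.0f}")
        capacity *= 2.0
        effort *= 2.0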

Thus, I believe there are reasons to think that an AGI will not accelerate
its rate of computing-power increase by a factor of two with each design
cycle.

-- 
Mikko Särelä
    "I find that good security people are D&D players"
        - Bruce Schneier

