Re: Is a theory of hard take off possible? Re: Investing in FAI research: now vs. later

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Feb 20 2008 - 17:16:57 MST


William Pearson wrote:
>
> If you accept that the rate of improvement of a learning system is
> bounded by the information bandwidth into it, then we can start to put
> bounds on the rates of improvement of different systems based on
> energy usage and hardware (e.g. a PC with two DDR2-800 modules, each
> running at 400 MHz, will limit the software running on it to improving
> itself at 12.8 GB/s, its memory bandwidth, or less if you count only
> the connection to the web and keyboard/mouse).
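[The arithmetic behind the 12.8 GB/s figure can be sketched as follows; the function name and the dual-channel framing are my own illustration, not part of the original post:]

```python
def ddr_bandwidth_gb_s(io_clock_mhz: float, bus_width_bits: int = 64,
                       channels: int = 1) -> float:
    """Peak transfer rate of a DDR memory configuration.

    DDR moves data on both clock edges, so transfers/s = 2 * I/O clock.
    Each transfer moves one bus width (64 bits = 8 bytes per module).
    """
    transfers_per_s = 2 * io_clock_mhz * 1e6
    bytes_per_transfer = bus_width_bits // 8
    return transfers_per_s * bytes_per_transfer * channels / 1e9

# Two DDR2-800 modules (400 MHz I/O clock) in dual channel:
print(ddr_bandwidth_gb_s(400, channels=2))  # 12.8
```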

The bound is real but INSANELY HIGH - it's like saying, "Let's begin
by accepting that computational processing has to obey the laws of
physics." There are physical bounds, if we assume our laws are
correct; but they're insanely far above the world we know, billions of
trillions of quintillions times better than modern technology or
biology, using the power output of just one star.

Similarly, there are bounds on what you can deduce about the outside
universe, and how much you can manipulate it, based on bits coming in
or going out - but the bounds are based on Solomonoff induction and
the second law of thermodynamics. Meaning that so far as principle
goes, you can use sensory and manipulative bandwidth orders and orders
and orders of magnitude more efficiently than humans do.

And worse, unlike the case of setting implied physical bounds on
processing power, there's no practical way for us humans to compute
what the real bounds are. We have no idea how much an ideal
Solomonoff inductor could deduce by reading a single copy of
Shakespeare's Hamlet, because we can't run all possible computations
to determine how much remaining internal variance there is within
simple computations that produce a copy of the SI reading a copy of
Hamlet.

What it ends up working out to is, "A superintelligence can deduce an
unknown amount about humanity from reading a single book, and we have
no way of guessing how much." Likewise, "A superintelligence can
shape its outside world an unknown amount by emitting a 100-character
text message, and we have no way of guessing how much, because we
can't examine all possible 100-character text messages and their
consequences."
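[Why that enumeration is out of reach is simple arithmetic; the 95-symbol printable-ASCII alphabet below is my assumption for illustration:]

```python
import math

# Number of distinct 100-character messages over printable ASCII
# (95 symbols).
alphabet = 95
length = 100
messages = alphabet ** length

# Orders of magnitude: ~10^197 possible messages, dwarfing the
# ~10^80 atoms in the observable universe, let alone anything we
# could exhaustively simulate for consequences.
print(round(math.log10(messages), 1))
```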

And don't forget that not all interactions with the universe carry the
neat label "interaction" - can the SI look at its own source code?
Are *you* going to look at its source code? That's an out-of-band
sense impression and manipulation right there.

See also the demonstrated ability of a reasonably smart intelligence
to shape humans in ways they claimed to be impossible, through
bandwidth limited to a few baud in a text-only IRC channel, aka "The
AI-Box Experiment".

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT