Infinite computing

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Apr 24 2001 - 18:14:05 MDT


By request, I'm posting a summary of the known proposals for achieving
infinite computing power. Since it's easy to demonstrate that physical
law permits computing elements that operate millions of times faster than
neurons, and this quite suffices for superintelligent, far-transhuman AI,
I no longer have a need to speculate about Moore's Law literally going on
*forever*, with transhuman smartness finding loopholes in any and all
"physical limits". Which doesn't mean that I think infinite computing
power is implausible; just that I got sick of hearing catcalls about it.
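
For concreteness, here is the back-of-the-envelope version of that claim.  The
neuron firing rate and switching frequencies below are round-number assumptions
of mine, not figures from any particular source:

```python
# Back-of-the-envelope speedup ratios.  All figures are assumed round
# numbers: ~200 Hz is a generous peak firing rate for a biological neuron,
# 1e9 Hz is an ordinary present-day logic transistor, and 1e15 Hz is the
# molecular switching limit mentioned later in this post.
neuron_hz = 200.0
transistor_hz = 1e9
molecular_limit_hz = 1e15

print(transistor_hz / neuron_hz)       # 5e6: already millions of times faster
print(molecular_limit_hz / neuron_hz)  # 5e12
```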

--
The most famous way of achieving infinite computing power is, of course,
the Omega Point proposed by Tipler; as temperatures rise ever faster
during the Big Crunch, the asymptotically increasing energy densities
permit the performance of an asymptotically increasing number of
computational operations, such that an infinite number of computations is
performed before the Universe ends.  Unfortunately, this requires waiting
until the end of our Universe, which now appears to be open rather than
closed anyway.
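
The arithmetic trick behind the Omega Point is worth making explicit: if each
successive operation takes a geometrically shrinking slice of external time,
infinitely many operations fit into a finite interval.  A toy sketch (the
1/2**k schedule is my illustration, not Tipler's actual cosmology):

```python
# Toy Zeno-style schedule: operation k takes 1/2**k units of external time.
# The durations form a convergent geometric series (sum = 2), so the number
# of operations is unbounded while total external time stays finite.
elapsed = 0.0
ops = 0
for k in range(60):       # 60 terms already sit at the limit
    elapsed += 1.0 / 2 ** k
    ops += 1

print(ops, elapsed)       # 60 operations squeezed into (just under) 2 time units
```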

If you can perform infinite computation during a Big Crunch, you can
probably also perform infinite computations during a Big Bang.  Thus, one
proposal for infinite computing power involves pinching off a section of
spacetime from our own Universe and creating a new Universe, with an
accompanying Big Bang.  When this new Universe begins to cool off, perhaps
after 1e-43 seconds (post-Planck-time), another Universe could be created,
and so on ("Alpha Line" computing).

Less ambitiously, the "Linde Scenario" would involve opening up a series
of basement Universes connected to our own via wormholes.  "Each new
universe could be the parent of many new universes, so that the whole
population would grow exponentially, the gradual entropic degradation of
old universes playing only a negligible role in slowing down the
process."  (Nick Bostrom.)  This does not achieve actual infinite
computing speeds at any given point, but it does permit life and growth to
continue indefinitely, and the performance of an unboundedly large number
of computations as time goes on.  Which is all we really care about,
right?

Linde Scenario:
  http://www.aleph.se/Trans/Global/Omega/linde.html
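
A toy version of Bostrom's point, with made-up parameters (the number of
daughter universes per generation and the entropic loss rate are both mine,
purely for illustration):

```python
# Toy model of the Linde scenario.  Each generation, every universe spawns
# `children` new basement universes (so the population multiplies by
# 1 + children), while `loss` is the fraction degraded away by entropy.
# Both parameters are made up; the point is that a small loss rate leaves
# the growth exponential.
def universes(generations, children=2, loss=0.01):
    n = 1.0
    for _ in range(generations):
        n *= (1 + children) * (1 - loss)
    return n

print(universes(10))   # ~53,000 universes after ten generations
print(universes(20))   # growth stays exponential despite the losses
```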

Our Solar System contains a limited amount of mass, and Conservation of
Mass and Energy says that we can't just make more.  However, the laws of
physics contain no statement asserting Conservation of Material.  If
negative energy can be manufactured, then positive matter and negative
matter could be produced in paired amounts - in theory, in indefinite
quantities.  Furthermore, because the total mass would be zero,
interlacing negative and positive matter would permit the construction of
arbitrarily large dense megastructures without those megastructures
collapsing into black holes.  Thus, rather than life running into hard
limits when all the matter in our Solar System is consumed, growth could
continue indefinitely.  Since negative energy would also permit FTL, time
travel, wormholes, and the violation of the second law of thermodynamics,
many people postulate that Cosmic Censorship prevents the manufacture of
negative energy.  (Frankly, I think this is a rather warped way of
reasoning about the laws of physics; the only way to find out whether
negative energy can be manufactured is to try it.  When did it start
becoming permissible to reason from a priori philosophical constraints
instead of experiment?  Oh, never mind.)

As long as you're constructing arbitrarily large computers, why construct
them from mere molecules, which have a maximum theoretical switching speed
of 1e15 hertz before the energies used tear them apart?  Neutronium, being
far denser, permits much faster computing speeds from a given amount of
mass, with a maximum switching speed of 1e21 hertz.  An even denser
material is Higgsium, in which a negatively charged Higgsino takes the
place of the nucleus, with orbiting protons serving the function now
served by electrons.  Higgsium is 1e18 times denser than water; a
thimbleful weighs
as much as a mountain.  Monopolium uses a light monopole of one polarity
(North) bound to a heavy monopole of the opposite polarity (South); the
density is 1e25 times that of water, and a thimbleful weighs as much as
the Moon.  (Hence the need to use interlaced positive and negative
monopolium structures, to prevent the collapse into a black hole of any
reasonably-sized structures.)

Neutronium, Higgsium, and monopolium:
  http://www.aeiveos.com/~bradbury/Authors/Computing/Moravec-H/HDPSF.html
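
Those density figures are easy to sanity-check.  Assuming a thimble holds
about one cubic centimeter (my round number), and taking rough reference
masses of 1e15 kg for a large mountain and 7.3e22 kg for the Moon:

```python
# Sanity check of the quoted densities.  Assumptions: a thimble holds
# ~1 cm^3, water is 1 g/cm^3; the reference masses are rough figures
# (a large mountain ~1e15 kg, the Moon ~7.3e22 kg).
thimble_cm3 = 1.0
water_g_per_cm3 = 1.0

higgsium_kg = thimble_cm3 * water_g_per_cm3 * 1e18 / 1000.0    # 1e18 x water
monopolium_kg = thimble_cm3 * water_g_per_cm3 * 1e25 / 1000.0  # 1e25 x water

moon_kg = 7.3e22
print(higgsium_kg)              # 1e15 kg: mountain-scale
print(monopolium_kg / moon_kg)  # ~0.14: within an order of magnitude of the Moon
```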

Of course, if you keep on manufacturing more and more zero-mass
"interlaced matter", you eventually run out of *space* in your Solar
System.  I believe that I was the first one to propose solving this
problem using Van Den Broeck's "micro-warp" adaptation to Alcubierre's
warp drive - also known as the "tardis warp" or "warp bubble".  Van Den
Broeck found a solution to the General Relativity equations which permits
a large space, say 100 meters in diameter, to be connected to the rest of
our Universe through a tiny bottleneck, much smaller than an atomic
diameter.  Thus, you can pack a very large number of Van Den Broeck
bubbles into a volume the size of our Solar System.  Furthermore, as far
as I know, there's no theoretical reason why you can't open up one Van Den
Broeck bubble inside another one, which would permit total living space to
keep growing exponentially forever.  You'd probably want to use a wormhole
network to keep all the bubbles in communication.
In short, this is a design for a galaxy-sized computer built of pure
monopolium that fits inside your pocket and weighs as much as a Kleenex. 
As far as I know, I was the first to propose "fractal tardis computing" as
a means of achieving indefinite exponential growth.

Alcubierre warp drive:
  http://xxx.lanl.gov/abs/gr-qc/0009013
Van Den Broeck tardis pocket ("micro-warp"):
  http://xxx.lanl.gov/abs/gr-qc/9905084
(Googling will uncover plenty of less technical explanations.)
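
To see why nesting bubbles gives exponential growth, assume (my numbers,
purely illustrative) that each bubble has a 100-meter interior and that each
interior hosts a thousand daughter necks:

```python
# Toy model of "fractal tardis computing".  Assumptions (mine, illustrative):
# each Van Den Broeck bubble has a 100 m diameter interior, and each interior
# hosts 1000 daughter bubbles, since each neck is far smaller than an atom.
import math

def interior_m3(radius_m=50.0):
    # interior volume of one 100 m diameter bubble
    return 4.0 / 3.0 * math.pi * radius_m ** 3

def total_volume_m3(depth, daughters=1000):
    # habitable volume summed over all bubbles nested `depth` levels deep
    return sum(daughters ** d * interior_m3() for d in range(depth + 1))

print(total_volume_m3(0))   # ~5.2e5 m^3: one bubble
print(total_volume_m3(3))   # ~5.2e14 m^3: multiplies ~1000x per level
```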

Finally, of course, there's the idea of using a closed timelike curve to
send the result of a computation back to before the computation started,
permitting an infinite number of iterations to be performed in what looks
to the outside Universe like a finite amount of time.  Of course, this
only works if you can construct a closed timelike curve, which IIRC was
proved to require negative energy.  Cool stuff, negative energy.  (Ha ha
ha!  Sorry.)
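
The self-consistency requirement can be made concrete: whatever value emerges
from the loop must equal the value sent back in, i.e. a fixed point of the
per-iteration step (this is the spirit of Deutsch's fixed-point treatment of
computation near closed timelike curves).  A classical sketch, using Newton's
update for sqrt(2) as a stand-in for the looped computation:

```python
# The answer a CTC hands you must be self-consistent: step(x) == x.  To the
# outside Universe only the fixed point is visible; the iterations cost no
# external time.  Classically we can only find it by iterating, as below.
# Newton's update for sqrt(2) is a stand-in for the looped computation.
def ctc_fixed_point(step, guess, tol=1e-12, max_iter=10_000):
    x = guess
    for _ in range(max_iter):
        nxt = step(x)
        if abs(nxt - x) < tol:   # self-consistent value found
            return nxt
        x = nxt
    raise RuntimeError("no fixed point found")

root2 = ctc_fixed_point(lambda x: 0.5 * (x + 2.0 / x), guess=1.0)
print(root2)   # 1.41421356..., the self-consistent "time-looped" answer
```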

I think someone also claimed infinite computing power using black holes,
but I haven't heard any specifics on that one.
--              --              --              --              -- 
Eliezer S. Yudkowsky                          http://intelligence.org/ 
Research Fellow, Singularity Institute for Artificial Intelligence


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT