From: Philip Goetz (philgoetz@gmail.com)
Date: Fri Jan 27 2006 - 07:55:49 MST
On 1/26/06, Eliezer S. Yudkowsky <sentience@pobox.com> wrote:
>
> 1: A finite computer has only a finite number of possible states. In
> the long run, then, you *must* either die (not do any processing past a
> finite number of operations), go into an infinite loop (also implying no
> further processing), or grow beyond any finite bound. Those are your
> only three options, no matter the physics. Being human forever isn't on
> the list. That is not a moral judgment, it's an unarguable mathematical
> fact. In the long run - the really long run - humanity isn't an option.
I agree roughly with your conclusions, but this reasoning is like
saying that the universe is boring because it has a finite # of
states. Simplify the brain's state as the collection of the on/off
states of its roughly 10^10 to 10^11 (let's say 10^11) neurons. Then it
has 2^(10^11) = 10^(log10(2) * 10^11), or about 10^(3 x 10^10), states.
Compare that to the age of the universe in tenths of a second (about
10^18.5).
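Just to check the orders of magnitude, here's a quick back-of-the-envelope
script, taking the 10^11 figure above at face value and plugging in a
~13.7-billion-year age of the universe (my number, not anything precise):

    import math

    NEURONS = 1e11                        # neurons treated as binary switches
    AGE_OF_UNIVERSE_S = 13.7e9 * 3.156e7  # ~13.7 billion years, in seconds

    states_log10 = NEURONS * math.log10(2)            # log10 of 2**NEURONS
    ticks_log10 = math.log10(AGE_OF_UNIVERSE_S * 10)  # tenths of a second so far

    print("log10(# brain states) ~ %.1e" % states_log10)        # ~3.0e10
    print("log10(# tenths of a second) ~ %.1f" % ticks_log10)   # ~18.6

So the brain has on the order of 10^(3 x 10^10) states, against only about
10^18.6 tenths of a second of universe history.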
If you bring the switching time of your brain down from a tenth of a
second to 10^-44 seconds (roughly the Planck time), you only have to
add about 143 neurons to make up for that change.
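Here's where the 143 comes from: speeding the clock up by a factor of
10^43 multiplies the number of time steps by 10^43, and log2(10^43) ~ 143
extra binary neurons multiply the number of available states by the same
factor. A quick check:

    import math

    speedup = 0.1 / 1e-44                   # tenth-of-a-second ticks -> 10^-44 s ticks
    print(math.ceil(math.log(speedup, 2)))  # extra binary neurons needed: 143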
I suspect that if you worked it out, you would come to the heat death
of the universe before a human brain needed to repeat state. If not,
add another hundred neurons.
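Working it out with a rough heat-death timescale (I'll use the ~10^100-year
black-hole-evaporation era; my figure, order of magnitude only):

    import math

    HEAT_DEATH_YEARS = 1e100   # very rough order of magnitude
    PLANCK_TIME_S = 1e-44      # matching the switching time above

    steps = HEAT_DEATH_YEARS * 3.156e7 / PLANCK_TIME_S  # time steps until heat death
    print("steps ~ 10^%.0f" % math.log10(steps))                      # ~10^151
    print("bits to index them: %d" % math.ceil(math.log(steps, 2)))   # -> 504

About 500 binary neurons are enough to give every Planck-time step until
heat death its own state, so a 10^11-neuron brain never has to repeat.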
I think a stability analysis might be more appropriate. The greatest
risk to a Great Old One is suicide. Equate suicidal thoughts with an
instability in whatever dynamical model you use for the brain. It might
then turn out that the odds of falling into a suicidal state increase
with the size of the network, so that the best strategy for living
forever is to simplify your brain, and so that there is a
(probabilistic) maximum on the amount of life an individual can have
(where "amount of life" is approximated by the complexity of that
organism's brain summed over time)!
You could probably add stability to a model with N neurons by
increasing redundancy, but that would impose a computational cost that
might scale faster than N. In that case, there would definitely be an
upper limit on the expected total life possible for an individual!
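To make those last two claims concrete, here's a toy model (entirely
made-up constants, not a claim about real brains): suppose each timestep
the brain falls into an absorbing "suicidal" state with probability p(N)
that grows as a power of its size N, with an exponent above 1 standing in
for the redundancy cost growing faster than N. Expected lifetime is then
1/p(N) steps, and expected "amount of life" is N/p(N):

    # Toy model, made-up constants: per-step probability of entering an
    # absorbing "suicidal" state grows as a power of the brain size N.
    def expected_total_life(N, p0=1e-9, N0=1e9, alpha=1.5):
        p = min(1.0, p0 * (N / N0) ** alpha)  # per-step hazard, capped at 1
        return N * (1.0 / p)                  # complexity x expected lifetime

    for N in (1e8, 1e9, 1e10, 1e11, 1e12):
        print("N = %.0e  expected total life ~ %.1e" % (N, expected_total_life(N)))

With any exponent above 1 the expected total falls as N grows, i.e. the
simplest brain wins; below 1 it grows without bound, so whether there is a
real maximum turns entirely on how that hazard scales.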
(I don't think that talking about strange attractors, BTW, solves the
problem Eliezer is talking about - they occur in real-valued state
spaces, while we're talking about a digitized brain model with a
digital state space. (If the brain is analog, of course, then Eli's
argument is a non-starter. But then Eli will come back with some
quantum mechanics, to which I will respond with a similar argument
about heat death of the universe.))
- Phil Goetz