Re: physics of uploading minds

From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Fri Oct 28 2005 - 02:37:30 MDT


Phillip Huggan <cdnprodigy@yahoo.com> wrote:
> I don't *think* the universe functions as anything but an ordered series of
> snapshots, but I know reality at my consciousness's level of resolution is
> continuous somehow. Descartes's reasoning works here in a modified form: I
> think therefore I am, as the thought requires a non-zero amount of time to
> exist in my mind.

Working out the real minimal physical implications of the existence of
subjective experience is a tricky prospect. My own best guess would
be that all you can say is that some set of elements exists somewhere (at
least once) as a causal chain that implements the functional characteristics
of the cognition you're reflecting on, though the issue is so complicated
that I'm not sure any one-sentence summary can contain much useful
description.

> I see no difference between flash-uploading and a Moravec transfer; both
> are impossible unless the ability to harness both types of Singularities
> is assumed.

The accuracy of flash-uploading is bounded by the known laws of physics.
Moravec transfers aren't subject to the question of accuracy in the same
way; we can't even compare the similarity of the upload's future history
with the future history the original brain would've had in the abstract,
because quantum-level uncertainty makes at least the latter very fuzzy.
I don't see why you're writing Moravec transfers off as impossible though;
it would take a lot of knowledge and technology, which /probably/ won't
be developed without triggering a Singularity anyway, but I can't see a
fundamental reason why you can't do this before a Singularity occurs.

> A Penrose refutation would involve stating that synapses are affected by
> quantum forces, so *chaotic* effects would affect the likelihood of whether
> a path of branching synapses turns into a thought. Accurately computer
> modelling every single molecule in the human brain would be very difficult
> even with mature MNT.

Generally we assume that the brain's behaviour can be strongly predicted
by a theory operating at a higher level than individual molecules. It's
true that we don't have this theory yet, but it looks highly unlikely
that processing in the brain is /that/ parallel and mass-efficient. For
Moravec transfers we can replace each neuron with a custom-built
nanomachine to do the same job; we don't need to use general computation
and conventional programming (though that does make some other things
you might want to do next easier).
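
To make the 'same job, different substrate' point concrete, here's a toy
Python sketch. Everything in it (the threshold model, the lookup-table
replacement, the numbers) is an assumption I've made up for illustration,
not a claim about how real neurons or nanomachines would work:

    from itertools import product

    class BioNeuron:
        """Stand-in for a biological neuron: a simple threshold unit."""
        def __init__(self, weights, threshold):
            self.weights = weights
            self.threshold = threshold

        def fire(self, inputs):
            total = sum(w * x for w, x in zip(self.weights, inputs))
            return 1 if total >= self.threshold else 0

    class RoboNeuron:
        """Drop-in replacement with different internals (a precomputed
        lookup table) but identical input/output behaviour."""
        def __init__(self, original, n_inputs):
            self.table = {pattern: original.fire(pattern)
                          for pattern in product((0, 1), repeat=n_inputs)}

        def fire(self, inputs):
            return self.table[tuple(inputs)]

    def run(neurons, stimuli):
        return [tuple(n.fire(s) for n in neurons) for s in stimuli]

    bio = [BioNeuron([0.5, -0.2, 0.8], 0.6), BioNeuron([0.1, 0.9, -0.4], 0.3)]
    robo = [RoboNeuron(n, 3) for n in bio]
    stimuli = [(1, 0, 1), (0, 1, 1), (1, 1, 0), (0, 0, 0)]

    assert run(bio, stimuli) == run(robo, stimuli)

All the rest of the network sees is fire(); as long as the replacement
returns the same outputs for the same inputs, how it gets there
internally doesn't matter.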

> Multiplied across all neurons and synapses, personal identity might be
> lost within a second of turning off the biological neurons and turning
> on the robo-neurons. Alternatively, personal identity would be lost
> incrementally as each facet of our mind's biological counterpart was
> replaced incrementally with robo-neuron derived mental processes.

What is this 'personal identity' of which you speak? In physical terms?
Certainly the upload isn't going to lose any objectively measurable
cognitive content, including their self-model; if they did, it wouldn't
be a successful upload in the first place.

> Assume two perfectly identical sheets of paper. If I write on one, is
> the other immediately stained as well? No. Why not? Each sheet's
> gravity independently warps space-time. Each sheet exerts, consists of,
> and is subject to a whole host of other forces and fields. This is the
> reasoning that is lacking in the simple thought experiment contrasting
> robot neurons and bio-neurons.

I have no idea why this is meant to be relevant. The future state of the
paper doesn't depend only on the existing state of the paper; it depends on
the existing state plus external influences. If the external influences
differ, the states will diverge, but this is true for /any/ system
including humans and uploads. The (narrow) question we are interested in
is 'will the upload and the human stay converged over time assuming no or
identical external influences (i.e. sensory input)'. The more general
question is 'why do we care if they diverge anyway, as long as they meet
a personally acceptable standard of self-similarity?'.
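
The narrow question can be illustrated with another toy sketch (the
update rule and the numbers are arbitrary assumptions; the only point is
that a deterministic system and its copy can't diverge while their
inputs stay identical):

    def step(state, external_input):
        # One deterministic update: same state + same input -> same next state.
        return (3 * state + external_input) % 1000

    def run(initial_state, inputs):
        state, history = initial_state, []
        for x in inputs:
            state = step(state, x)
            history.append(state)
        return history

    original_inputs  = [5, 2, 7, 1, 9]
    identical_inputs = list(original_inputs)   # same 'sensory' stream
    perturbed_inputs = [5, 2, 8, 1, 9]         # differs at one point

    print(run(42, original_inputs) == run(42, identical_inputs))  # True
    print(run(42, original_inputs) == run(42, perturbed_inputs))  # False

Divergence enters only through the inputs, which is the paper example
restated; the interesting part is the second, broader question of how
much divergence we're prepared to call 'still me'.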

> Incrementally switching the neurons only muddies the waters. If you take
> away a few of my neurons by hitting me in the head with a hockey puck, it
> is very unlikely to affect my identity or future actions. But keep taking
> away more and more neurons...

Any kind of interaction with the world may affect your identity or future
actions, which are indeterminate anyway. Following this line of reasoning
to its logical conclusion would result in a total denial of personal
change, which sounds pretty silly to me.

> As I said, I believe the process referred to as uploading can create
> conscious entities. Just don't look here for immortality. The difference
> is critical in how an AGI treats us. An AGI that kills us off and replaces
> us with conscious agents exhibiting a slightly improved standard-of-living
> is UFAI. Whereas an AGI that keeps us around and instead uses rocks to
> make slightly improved conscious agents, is a lot better than twice as good
> as the first AGI in my books.

I agree that uploading people involuntarily looks like a bad idea from
here, but I wouldn't go so far as to say that it's definitely a bad idea.
Perhaps all of these doubts can easily be shown to be silly superstition
somehow. I think it's possible that entities vastly more intelligent than
us might validly conclude that we should be immediately uploaded for our
own good, and if the reasoning was truly well-founded I wouldn't object.

 * Michael Wilson

                
___________________________________________________________
To help you stay safe and secure online, we've developed the all new Yahoo! Security Centre. http://uk.security.yahoo.com


