From: Filipe Sobreira (warlordbcm1@yahoo.com.br)
Date: Mon Feb 09 2009 - 17:43:34 MST
Well, since English is not my native language and typing anything really meaningful takes LOTS of time for me, I'll just paraphrase someone whose opinion is exactly the same as mine. He expressed his viewpoint on this subject incredibly well, and since we think alike, I've decided I'll be more successful using his terms than my own. His name is David Jackson, and he usually posts on another list I lurk on. So here we go:
________________________________
Here's more or less where my notions of what constitutes self are coming from:
http://en.wikipedia.org/wiki/Self_(philosophy)
It seems that a lot of this discussion drifted away from the original topic. A lot of physics arguments were used to prove that two atoms of the same kind are equal and interchangeable. That much is certain, and no one could tell the difference between them. Atoms have no labels, nor souls, as John mentioned before. But that was not what I was talking about in the first place. I started talking about atoms to show that even two equivalent atoms are not one and the same. If I take two hydrogen atoms (let's call them... Bob and Bob), put them into a Bose-Einstein condensate and then heat them back up, there will still be TWO atoms. They do not disappear, and despite now sharing all measurable information, despite being absolutely interchangeable, this does not change the fact that they are still there, absolutely identical, but two nonetheless (which is easily verified by measuring the total mass). Now, what does this have to do with the personal identity discussion? My point was that if even two atoms of the same kind are not ONE, why would two people with the same mindstate be? But that seems to be something we all agree on: two things are two. Before the discussion drifted off to something unrelated, I was going to say that we seemed to agree on one basic thing, the aforementioned 'two are two' point. The problem arises when someone says something like: "Your copy is you, in every meaningful way"
When you say "my copy is me" my immediate question is "relative
to whom?" I don't know that the statement makes sense in the absence
of an observing perspective. SOMEBODY has to make that statement ... If you presume an external observer, then, yes, it's obvious
that two instances of the same mindstate are informationally
equivalent. But what happens when the comparing observer IS one of
those mindstates? Does he find himself to be equivalent to his copy?
How can he, when his evaluation of himself includes his perception of
self (be that merely information, a soul, or whatever) while his
evaluation of his copy does not?
I'm not just talking about the "information interface" the
rest of the world sees. This is what I've called the third-person perspective before. I'm talking about the subjective experience of what it feels like to be me, as opposed to what it might feel like to be someone else. Bear with the language analysis coming up; it's not
that I don't think you know English, it's just that I think it's
important, in this case, to make sure we're all on the same page.
I use the subjective pronouns "I", "me", etc to denote that
"self" in my writing. So when I say that "I am writing", I mean that
the subjective experience of being me includes typing this paragraph.
That renders it distinct from a reference to any other writing critter
whose experience is not that of being me.
When you say "My copy is me" there is an implicit "I believe"
tagged onto the beginning of that sentence. To me, that denotes a
subjective experience including the belief that it includes the
subjective experience of some entity labeled "my copy." This seems
circular to me, and is probably not what you actually mean when you say something like "I am a pattern, and everything with the same pattern is me". To me, this denotes something like a distributed mind, which is unrelated to this topic.
If your subjective experience does NOT include the subjective
experience of your copy, then I don't see how it's possible to say that
the condition of the copy's subjective experience means anything in
terms of your own. If your objective in creating that copy is to
attain personal immortality (i.e., to make your subjective experience survive indefinitely), then I
don't see how spawning another entity who happens to operate with the
same information in your brain approaches that goal. Its existence is
comforting to others, who can only evaluate you from an external
perspective. Their evaluation may result in an arbitrary degree of
equivalence. But their evaluation of the two of you is NOT the same as the analogous evaluation of either one of you by the other. They MUST
evaluate themselves to be separate, independent beings unless they
share a subjective experience of being. (Or unless they're crazy....)
You could say that as long as your pattern is retained (and a pattern is a type of information) then the 'whole' is retained, even if the pattern is not actively being expressed at some point, or changes substrates (e.g., is uploaded). Okay
... but this is the perspective of an outside observer. What is the
perspective of the pattern being copied? Should it expect its
perspective to shift should someone copy it bit-for-bit onto an SD
card, take that card to Mars and load it into a functionally equivalent
substrate? I don't see why.
So, for a mind faced with its own extinction and wishing to avoid it, is creating such a copy a viable strategy for survival? Advocates of the Pattern Theory of Identity claim that "A process that takes you apart and destroys/disrupts that
informational pattern that is you would certainly kill you. A process
that retained the pattern would retain you as it were", but again, from the perspective of an external observer, sure. Perhaps
even from the perspective of the copy, once it is instantiated. But
what about from the perspective of the critter being so disassembled?
Should it presume that, once it loses consciousness, it will not gain
it back?
Perhaps it's worthwhile to consider what, if any, minimally
functional state exists for a working mindstate. At what point does
removing portions of a mindstate render it inoperable or "dead?" If I
am being copied by progressive disassembly and duplication, then at
some point I would presume my mindstate will lose enough of itself that
it will cease to function. That point, I'd think, would be a good
point to label "dead." Depending on where this cutoff point exists,
there are a few possible implications for a progressive
disassembly/copy scenario:
1) The "donor" mindstate reaches its nonfunctional state
BEFORE the copied mindstate becomes viable. In this case, the donor
mindstate experiences extinction before its copy can be brought
on-line. Unless there's some means of "floating" the donor's sense of
self during the dead period, I don't see how it could end up in the
copy. So a donor going into such a situation should probably not
expect to regain consciousness once the copy is complete.
Therefore, the copy is not a viable route to continued existence.
2) The donor mindstate reaches its nonfunctional state AFTER the copied
mindstate becomes viable. In this case, the donor can carry out the
"poke test" to verify that its copy constitutes a separate, independent
mindstate. Since it doesn't share a sense of self with the copy NOW,
why should it expect to share one later? It should probably assume
that, if the disassembly process continues, it will lose consciousness
and never wake up.
Therefore, the copy is not a viable route to continued existence.
3) Actually, this is a variant on 1 ... but it's kind of an edge case, so it may deserve special treatment. Let's say that the cutoff point is
when 1/2 of the mindstate is removed. So the donor loses functionality
at the same point the copy becomes viable. Physics, however, prevents
the off-line/on-line events from being truly simultaneous. SOME amount
of information must be removed from the donor to be duplicated and
placed in the copy. The transfer of this information cannot take place
faster than the speed of light. So at best, the donor will become
nonfunctional a fleeting instant BEFORE the copy becomes viable. This,
then, becomes situation #1.
So what if you could "freeze" a brain in time, stopping all activity for an arbitrary period of time before restarting it? What
would be the effect of this on that brain's self? Personally ... I
have no idea. My feeling is that it wouldn't do anything -- the "self"
would pick up exactly where it left off, perceiving no gap in its own
existence. Is being inactive the same as being dead? You could argue that it's like resting, but
"resting" is still an active state. Stuff is still going on. I'm not
convinced that "self" requires consciousness. In fact, I'm pretty sure
"self" doesn't rely on consciousness, since it survives my snoozing
daily. I don't know where the cutoff point is where the activity of a
brain/whatever becomes insufficient to support a "self," but I'm pretty
sure it exists somewhere between "sleeping" and "dead."
So what if you freeze a mindstate, copy it, destroy the
original and reinstate the copy? Will the "self" of the original
mindstate find itself in the copy? Why should it? And if it
shouldn't, should the original mindstate expect to regain consciousness
when the process is complete?
Moreover, since I can conceive of a situation in which
the data patterns of my mind are stored to SD card while the original
"me" persists, and since I can prove by poking that the SD card is not
me, I'd have to conclude that there's nothing special about the
circumstance where the original me doesn't exist simultaneously with
the card.
Pattern Theory of Identity advocates tend to say that there is nothing special in the nature of self, which leads them to conclude that one self is as good as another. But isn't that "nothing special" nature a false premise? Observers are not atoms. Your perception of yourself IS special, in that it's
unique in your perception of the world. You do not perceive any "self"
other than your own, so by definition your experience of self is
unique.
(At least, I assume you perceive one, since I'm not a solipsist. But, for all I know, you really don't have a subjective experience of self and I'm just arguing with a black box, as Matt suggested might be possible.)
If I have nine red balls and one green
one sitting in a line, perhaps you could make an argument that the
physical arrangement of the balls is inconsequential -- there's nothing
special about having the green ball on one end of the line as opposed
to, say, the middle. But there IS something special about the green
ball, in that it's green while all the others are red.
Saying that two informationally equivalent people are identical in every way that matters is nonsense, because it arbitrarily ignores each one's own first-person perspective. To an external observer, yes, they are identical. To one of the instances in question, I think it's pretty clear that the copies are not identical. Suppose I could be nondestructively copied. To me,
looking at my copies, I perceive myself to be "green" because I am
experiencing being me, whereas they are "red" because I am perceiving
them as other. All copies are NOT equivalent from the perspectives of one another. Each has its own unique sense of self, just as I do. They have the same name, the same memories, the same
genetic makeup -- the same sense of identity as it relates to the rest
of the world -- but they do not have the same experience of being who
they are. Hence they are not sufficiently equivalent (to one another)
to justify calling them the same person.
You might say however that one copy is just as good as another. I would
agree -- the unique experience of any particular copy doesn't mean
anyone ELSE is not justified in assuming multiple copies are the same
person when they are, by all outward measure, equivalent. And this has
the potential to lead to all sorts
of very dangerous and unpleasant things, since it VASTLY devalues the
individual. If one copy is just as good as another, then what reason would I have to value the
existence of any particular instance if equivalent copies exist? Why
bother installing safety devices in vehicles, for instance? If
somebody dies in a car accident, we can just instantiate a backup copy
and everybody's happy. But why stop there? Perhaps, every year, as a kind of "research
tax," every citizen is required to donate a copy of themselves to
scientific research. Since we aren't worried about impacting the
individual's health, we can conduct any sort of experiments we want on
his copies. If copies are cheap, we need not even take any special measures to sustain
them. As long as they survive long enough for us to do what we set out
to do, we're good.
Say
we're worried about how our kids are going to turn out? No problem.
Make a backup of them when you're reasonably happy with their behavior
and just kill the little bastards when they start to get unacceptably
rowdy. Then try something else with the backup to try and steer it
onto a more desirable developmental course. No reason why this approach need be restricted to children -- governments could do the
same thing to their citizens if they become insufficiently loyal.
That's abhorrent, you might say -- and I'd agree. But I think that a morality proscribing such things would be entirely arbitrary in a
society that perceived backups/uploads as you and Matt describe.
If people were as casual about death as they are about birthdays (e.g., grandma's reinstatement party), why would they go to the
lengths we go to today to prevent it? From our modern perspective,
isn't it likely that such a society would seem exceptionally brutal?
If people have no reason to
value their own personal existences ... what logical reason do they
have to respect the existence of others? Economics, maybe. But, when
you get right down to it, it's a thin basis for a moral code....
But back to my point. Let's say, for the sake of argument, that information is
sufficient to establish identity. As long as two things are
informationally equivalent, neglecting such "trivial" aspects as
location in space and physical composition, perhaps we can identify
them as a single entity.
Now take a single-celled organism. Call it Zell. Zell can be
represented by a given pattern of information, as encoded by its genes,
the structure of its organelles, etc. Say Zell divides perfectly, with
no mutation, to produce two identical copies of itself. How many cells
do we then have? Does it make sense to say that there is really only
one cell, because they encode the same information? Does it make sense
to say that when we kill one cell, that cell is dead and the existence
of the other in no way affects the fact of its nonexistence?
Should we bother giving the other cell its own name, or should we refer
to them both simply as "Zell?"
Why should we think about minds any differently from the way we
think about cells, if they are both just patterns of information?
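For whoever prefers code to cells, here is a minimal sketch of that same "equal but not the same" distinction, in Python. The Cell class, its fields and the names are just a made-up illustration, nothing from the physics or biology above: value equality plays the role of the external, informational comparison, while object identity is the plain fact that there are still two instances.

from dataclasses import dataclass

@dataclass
class Cell:
    genome: str          # the information that defines the cell
    organelles: tuple    # more defining information

zell = Cell(genome="ATCG", organelles=("ribosome", "membrane"))
clone = Cell(genome="ATCG", organelles=("ribosome", "membrane"))

print(zell == clone)   # True:  same information (the third-person, external comparison)
print(zell is clone)   # False: still two distinct instances

# "Killing" one instance is not undone by the survival of an equal one:
del zell               # zell is gone; clone is unaffected and does not contain zell

Equality is what the outside observer measures; identity is what each instance has for itself.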
Enough,
Filipe Sobreira
"Adtollite portas principes vestras
Et elevamini portae aeternali
Et introibit rex gloriae.
Quis est iste rex gloriae?"
(Psalm 23(24):7–8a)
________________________________
From: Matt Mahoney <matmahoney@yahoo.com>
To: sl4@sl4.org
Sent: Monday, February 9, 2009, 12:55:08
Subject: Re: [sl4] Personal identity (was Re: A teleporter)
--- On Sun, 2/8/09, Filipe Sobreira <warlordbcm1@yahoo.com.br> wrote:
> This whole topic is very interesting, but I find it hard to
> discuss without knowing what the others believe about some of
> the basic principles, so this is a question to the group as
> a whole: What does your theory of personal identity consist of?
> What does something need to have to be called 'you'?
Animals and children that have no concept of death have nevertheless evolved to fear most of the things that can kill them. Death seems to be a well defined (learned) concept until you introduce ideas like AI, uploading, copying, teleportation, and the brain as a computer which can be programmed.
You don't know whether the universe is real or whether all of your sensory inputs are simulated by a computer that exists in a world you know nothing about. You don't know whether your lifetime of memories were programmed in the last instant by a computer in a world where time is a meaningless, abstract concept. If you were destroyed and replaced with an exact copy every second, each new copy would be unaware of it. If your memories were erased and replaced with that of a different person, the new person would be unaware of it. Exactly what is it that you fear?
The correct question is not what should you do, but what will you do? The way our brains are programmed, we will probably treat machines that look and act like people as people and give them legal and property rights. We will see our friends die and then appear to be brought back to life as machines, and want to do likewise. The implications are much easier to work out. We don't need to get hung up on meaningless discussions about the identities of atoms trying to define "you".
-- Matt Mahoney, matmahoney@yahoo.com