Re: continuity of self

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Sep 17 2002 - 11:28:15 MDT


Ben Goertzel wrote:
>
>> Also under the "historical perspective" department, the most
>> important forms of poverty are not monetary poverty but intelligence
>> poverty, lifespan poverty, and the lack of other resources which are
>> currently so hard to obtain that people tend not to think of their
>> absence as "poverty" but simply "the human condition".
>>
>> -- Eliezer S. Yudkowsky http://intelligence.org/
>>
>
> These assessments are highly subjective.
>
> From the point of view of many people in the world, material poverty is
> a much bigger problem than lifespan poverty or intelligence poverty.

The subjective priorities that seem immediately apparent to hacked-up
chimpanzee priority-assignment hardware are not always the same as the
rational priorities, given the same goals.

> And I can't help suspecting that you, Eliezer, might consider material
> poverty a little more serious if you were experiencing it! Even given
> your generally nonmaterialistic world-view and personality.

Why is it, Ben, that you chide me for failing to appreciate diversity, yet
you seem to have so much trouble accepting that this one person, Eliezer,
could have an outlook that is really, seriously different from your own,
rather than some transient whim? I don't have any trouble appreciating
that others are different from me, even though I may judge those
differences as better or worse. You, on the other hand, seem to have
difficulty believing that there is any difference at all between you and
someone you are immediately talking to, regardless of what theoretical
differences you might claim to believe in or respect.

Suppose that I did tend to focus more on material poverty if I were
experiencing it. That supervention of my wired-in chimpanzee priorities
is not necessarily more correct. I might as well say to some Third
Worlder, "You might consider material poverty less serious if you lived
here." For that matter, I could also be tortured until I considered
ending the pain to be the most important thing in the universe. So what?
What does this have to do with the price of tea in China, or to be more
precise, with Bayes' Theorem? How do any of these things change the
facts? In what way are they "evidence" about the issue at hand? I run on
vulnerable hardware with known flaws: there are environmental stimuli
that would produce very-high-priority signals capable of disrupting more
rational means of aligning subgoals with supergoals, and such stimuli
might even deliver negative or positive reinforcement in amounts
sufficient to overwrite the current goal system. Again, so what? That's
just a broken Eliezer, not an enlightened Eliezer.
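
To spell out the reference, the standard form of Bayes' Theorem is just:

    P(H \mid E) = \frac{P(E \mid H) \, P(H)}{P(E)}

A scenario in which my hardware is coerced into emitting different
outputs supplies no evidence E bearing on any hypothesis H about what is
actually rational or moral, and so it should not move the posterior at
all.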

I regard myself as an imperfect approximation to morality and rationality.
Note that I do not say "I regard myself as an imperfect approximation to
what I define as morality and rationality", because if that were the
explicit definition, then all kinds of future conditions which actually
just corrupted the future Eliezer's definition would count as
"fulfilling" it. It's a fact that my definition is stored on Eliezer's
brainware. The definition nonetheless does not make *explicit* mention of
Eliezer. Like, *this* is the map, and *this* is the territory, and you
can't fold up the territory and put it in your glove compartment, see?

I regard myself as an imperfect approximation to morality and rationality.
If you place me under environmental conditions that disrupt rationality,
that makes me a less accurate approximation. If you put my brain through
changes that destroy my definition of the target, I cease to be an
approximation to rationality/ethics/morality in any real sense. Again, so
what? I don't model that set of hypothetical conditions as actually
changing rationality or morality, so why should I care? Forget about what
you could theoretically torture Eliezer into caring about; that's a broken
Eliezer, or even a non-Eliezer. Whatever brainwashed mantras this
hypothetical entity would output have no relevance to the currently
functioning Eliezer's attempt to determine what constitutes rationality or
morality, since those definitions, regardless of whether they are *stored*
in Eliezer's memory, make no actual internal *mention* of Eliezer as
either a present or future determinant of morality.

It doesn't matter what you can do to my brain by imposing various
environmental conditions unless that hypothetical scenario provides real
information about rationality or morality. It automatically matters to
*you* because the definition of morality stored in Ben's memory makes
*explicit internal mention* of Ben's subjective opinion as an important
determinant of morality, so if you imagine future conditions that would
change your subjective opinion by direct supervention of chimp brainware,
it looks to you like it's morally relevant. Actually, this is
overcomplicating things; it matters to you because you directly process
the anticipation of subjective pleasure and subjective pain.

Well, I see things differently. It's not because I have different
brainware. It's because I have a different way of thinking deliberatively
about morality. DIFFERENT! Yes, different ways of thinking about
morality than yours do exist! Now your way may be right, and my way may
be wrong - I don't think it'd be antisocial or unfriendly of you to raise
that possibility, and in fact I rather wish you would - but we do think
about morality differently. You've certainly proved yourself capable of
uttering the sentence "Oh, but Eliezer, different ways of thinking about
morality may exist", but it seems to me that you then go on to refuse to
actually model any kind of moral thinking different from yours. I
understand that you think differently about morality than I do. I think
you're wrong, but I accept that you're definitely different. I even try to
model the causes and effects involved; yeah, sure, I might be getting it
completely wrong, but at least I'm *trying*. I can tell there's a
difference in what we think "morality" *is*, and I'm trying to understand
it, not dismiss it as a shallow surface disagreement.

> While the human condition in itself is profoundly flawed, there is no
> doubt that some humans live in vastly more flawed conditions than
> others.

"Vastly"? I think that word reflects your different perspective (at least
one of us must be wrong) on the total variance within the human cluster
versus the variance between the entire human cluster and a posthuman
standard of living. I think that the most you could say is that some
humans live in very slightly less flawed conditions than others. Maybe
not even that.

> As a person of great material privilege, you are inclined to
> focus primarily on the limitations and problems we all share.

As a student of minds-in-general, I define humanity by looking at the
features of human psychology and existence that are panhuman and reflect
the accumulated deep pool of complex functional adaptation, rather than
the present surface froth of variations between cultures and individuals.

If I am a person of "great material privilege", by the way, I would very
much like to have my own nanocomputer. What? I can't buy that? And
neither can Bill Gates? Guess we're both poor.

> Of course, I agree with you that creating a superhuman AGI can be a
> great way to end material poverty as well as to overcome the many
> self-defeating characteristics of human nature.

It's a way to rewrite almost every aspect of life as we know it. You can
take all the force of that tremendous impact and try to turn it to pure
light. You can even hypothesize that this tremendous impact, expressed as
pure light, would have effects that include the ending of fleeting
present-day problems like material poverty. But it is unwise in the
extreme to imagine that the Singularity is a tool which can be channeled
into things like "ending material poverty" because some computer
programmer wants that specifically.

--
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence





