Fundamentals - was RE: Visualizing muddled volitions

From: Brent Thomas (bthomas@avatar-intl.com)
Date: Wed Jun 16 2004 - 08:55:15 MDT


Again I'd like to express the hope that any F(AI) developers would
build into their systems (as a fundamental invariant?) the 'right of
withdrawal'.

This should not be part of a 'bill of rights', as it is so fundamental
to having an acceptable process that it should be a basic condition. No
matter what the collective thinks is best, even if it has (correctly!)
extrapolated my wishes or the wishes of the collective, it should still
not apply that solution to my (or any sentient's) physical being without
my express approval.

Change the environment, alter the systems, create the transcendent
utopia, but do it with 'choice': do not modify my personality or
physical being without the express consent of the sentient to be
modified (and, as part of that, be prepared to create 'enclaves' for
those who wish to remain unmodified).

Do this and I think the vision of the coming singularity will be more
palatable for all humanity. (And besides, I can't really object to
modifications if I was consulted, now can I?)

Do not tell me 'oops, we got it wrong...', as indicated here:

>>The reason may be, "That idiot Eliezer screwed up the extrapolation
>>dynamic." If so, you got me, there's no defense against that. I'll try
>>not to do it.

Instead (using the principle of no modification to sentients without
express permission), the system can tell me: "Hey, you'd be much happier
if you had green hair. We've done some calculations, and if at least 20%
of the population had green hair, there would be a 15% reduction in the
general unhappiness quotient. Can I make this modification to you, or
would you like a deeper explanation of the intents and consequences?"
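
Purely as illustration, the consent gate I have in mind might look
something like the following toy Python sketch (every name in it is
hypothetical; it is not a proposal for how a real FAI would be built):

# Toy sketch of a 'right of withdrawal' consent gate.  All names are
# hypothetical illustrations, not a real design.
from dataclasses import dataclass
from enum import Enum


class Response(Enum):
    APPROVE = "approve"   # express consent: the modification may proceed
    DECLINE = "decline"   # withdrawal: the modification must not proceed
    EXPLAIN = "explain"   # ask for a deeper explanation before deciding


@dataclass
class Proposal:
    description: str      # e.g. "give you green hair"
    rationale: str        # e.g. "15% cut in the unhappiness quotient"
    consequences: str     # the deeper explanation, offered on request


def apply_modification(subject, proposal: Proposal) -> bool:
    """Apply a modification to a sentient only with express approval."""
    response = subject.ask(proposal.description, proposal.rationale)
    while response is Response.EXPLAIN:
        response = subject.ask(proposal.description, proposal.consequences)
    if response is Response.APPROVE:
        subject.modify(proposal)
        return True
    # DECLINE: the system leaves the subject (and their enclave) alone,
    # no matter what its extrapolation says they would later have preferred.
    return False

The whole point is that apply_modification can come back False and the
system has to live with that answer.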

I'm mindful that the system is likely to evolve fast (go foom!
(hopefully in a good way!)), and that even if it is Friendly and has my
best interests at heart, I still may not want to participate in all
aspects of the system, even if its calculations tell it that I would in
fact, in the future, have appreciated being modified. I do foresee a
hard takeoff scenario, and as long as the fundamentals are good, then
even when no person or group of people is capable of understanding even
a small percentage of the operations or actions of the system, as long
as they retain personal choice over their own person (and possibly their
local environment), things will be fine.

(I don't particularly care that the system decided it needed to convert
90% of Arizona into a giant radio transmitter - just don't make me into
one of the support beams!)

Brent <== *likely to have green hair if the system says it would help
the singularity, but glad to be consulted*

-----Original Message-----
From: Eliezer Yudkowsky [mailto:sentience@pobox.com]
Sent: Tuesday, June 15, 2004 7:31 PM
To: sl4@sl4.org
Subject: Visualizing muddled volitions

There have been - not surprisingly, and it's a legitimate topic - many
recent questions of the form, "What if my volition isn't what I want?"

I would like to point out that, if your volition is not what you want,
there is a *reason* for it - it is not something that happens because of
a random-number generator.

The reason may be, "That idiot Eliezer screwed up the extrapolation
dynamic." If so, you got me, there's no defense against that. I'll try
not to do it.

The reason may also be, "My short-range volition is muddled and I
possess a medium-range volition coherent with that of the rest of
humankind."

I'd like to take a moment to discourse on the importance of attaching
concrete meanings to abstractions. People get lost when they don't do
this. I've seen people try to manipulate math that they don't have a
good grasp of, use equations that they have memorized but not seen as
obvious, and manipulate words piled atop words without a strong sense of
what the words mean. The same caution holds true for manipulating
thoughts about "volition" or "collective volition". I know what the
words mean because I invented them to describe specific things that I
wanted to do for specific reasons. I did try to list some of the reasons
in "Collective Volition", but I may have failed to convey a proper
grounding. In fact, it seems nearly certain that I have conveyed only a
fraction of the grounding. I do not make my choices at random; if you
can't see why I would possibly want to do something, you probably
mis-visualized the thing I want to do.

So when I say: 'The reason may also be, "My short-range volition is
muddled and I possess a medium-range volition coherent with that of the
rest of humankind."'

I mean: 'The reason may also be, "I am making a decision based on really
stupid and messed-up reasons, and if I knew more and thought longer I'd
come to pretty much the same conclusion as everyone else who thinks
about the subject, and it wouldn't be the same as my current decision."'

Now, remember, I'm not saying that our extrapolated volitions should
always override our current decisions. I'm saying that my current
decision is that the decision as to whether our extrapolated volitions
should override our current decisions should be made by our extrapolated
volitions. What if your current decision disagrees with the decision of
our extrapolated collective volition about whether the decision should
be made by your current decision or your extrapolated volition? This
means that:

1) Eliezer screwed up the dynamic.
2) Your current decision doesn't have a good handle on reality; there
are enormous consequences you don't foresee, and if you knew the true
consequences, you would repudiate your decision.
3) A nicer person who was otherwise extremely similar to you would
repudiate your decision.
4) Everyone else's volition doesn't agree with your decision, and
there's no way for you to convince Eliezer to let you personally take
over the world, and Eliezer isn't willing to personally take on the onus
of writing an individual volition dynamic because there's no way for a
humane superintelligence to veto his decision in case it turns out to be
wrong.

There are all kinds of reasons why our volitions might contradict our
decisions, and most of them involve either being stupid - unforeseen
consequences, missing obvious solutions - or not being the people we
wished we were: the sort of reason why Samantha is justly worried about
being extrapolated just the way she is rather than as a wiser version of
herself, and why Robin doesn't want murderous thoughts to kill people.

It is noteworthy that criticisms seem to be equally divided between
people who:

A) Think that humans are too awful for their volitions to be
extrapolated. (What am I supposed to extrapolate instead?)

B) Think that people's present-day decisions are just fine, and this
whole volition-extrapolating thing is unnecessary. (Are you absolutely
sure about that? You would go ahead and do it even if you knew that with
another few years to think about the subject, you would change your mind
and be horrified at your previous decision?)

Maybe I should let the two sides fight it out on their own, and then
take on the winner, if they haven't already struck a compromise that
looks just like "Collective Volition".

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

