Re: Notice of technical term: "Volition"

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Tue Jun 15 2004 - 10:58:13 MDT


Philip Sutton wrote:

> Hi Eliezer,
>
>>It's not my intent to be Orwellian. But sometimes I invent new ideas,
>>and I have found it best to give them understandable labels.
>
> But this is exactly the problem - the label you chose was not
> understandable - to most people beyond yourself. If a word has a
> deeply embedded meaning in common speech

That's why I picked "volition", which sounds - at least to me - much
stranger and more exotic, and less likely to be used in everyday speech,
than "decision", "will", or "best interest". When was the last time you
heard anyone but a philosopher say, "My volition is to drink a glass of water"?

> and you then try to use
> it in a special way - especially a way that is not just different from but
> almost the opposite of the normal meaning

With this I disagree. It is not the opposite of the normal meaning. It is
just that sometimes people's short-distance volitions oppose their
medium-distance or long-distance volitions *within a reasonable construal*
of volition - the smoker's short-distance volition for a cigarette, for
example, opposing the long-distance volition to stay healthy.

I also point out that collective volition is not about democracy, and
moreover, I never said anywhere that it was about democracy. That seems to
be an assumption that other people are making. I am not turning over an SI
to a majority vote; that is genocide by genie bottle. At the same time, I
am not personally taking over the world, and I am not turning the world
over to a separate humane intelligence. I want to do something
*complicated*, and yes, I need a new word to describe it.

> then you must expect that
> people will misunderstand what you are saying.
>
> We already have a term in common usage that covers what you are
> talking about and it is the notion of acting in an entity's 'best interest'.

The connotations of this term are not correct. "Best interest" *is* a term
in ordinary usage, and that is a problem; in ordinary usage someone else
construes the interest - and moreover, it is a human doing the construing.
That is not the same as a transparent optimization process extrapolating a
person's *decision* - not their interests, their *decision* - given a set of
cognitive transformations.

I refuse to use "best interest" *because* it is a term in common English
usage, and an extremely misleading one with respect to this novel thing I
want to describe. A collective volition is not going to look like a parent
or politician talking about best interests.

There is an art to picking words. I do not claim to be a master at it, but
it seems clear that "volition" is better than "best interest", because
volition is less often used, and has fewer connotations that cut against
the grain of what I want to say.

> So I might act in someone else's best interest - possibly even flatly
> contradicting what that person might say they want. But my defense
> will be to say that I understood the way to meet their needs better than
> they did. And they can counter-charge that I failed to understand them
> or that it was none of my business or whatever.

Another good reason not to use the word "best interest", since it has
connotations of defensible decisions in human politics. A collective
volition *extrapolates* politics but is of itself apolitical.

> You want the FAI to act in the best interest of humanity.

This is *not* what I want. See my answer to Samantha about the hard part
of the problem as I see it. I want a transparent optimization process to
return a decision that you can think of as satisficing the superposition of
probable future humanities, but that actually satisfices the superposition
of extrapolated upgraded superposed-spread present-day humankind.
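
To make the shape of that concrete, here is a purely illustrative sketch
in Python - every name in it is a placeholder of my own, not a
specification of the actual process:

    # Purely illustrative sketch; every name here is a placeholder,
    # not a specification of the actual optimization process.
    from collections import Counter

    def extrapolate_decision(person):
        # Stand-in for the hard part: return a *distribution* over the
        # decisions this person would endorse if they knew more and
        # thought faster - the "spread", not a point estimate.
        return person["extrapolated_decisions"]  # e.g. {"A": 0.7, "B": 0.3}

    def superpose(extrapolations):
        # Sum the individual spreads into one collective distribution.
        total = Counter()
        for dist in extrapolations:
            total.update(dist)
        return total

    def collective_volition(people, candidates, threshold):
        superposed = superpose(extrapolate_decision(p) for p in people)
        # Satisfice: return any candidate the superposition endorses
        # strongly enough, rather than maximizing a "best interest" score.
        for action in candidates:
            if superposed[action] >= threshold:
                return action
        return None  # no coherent collective decision: do nothing

The point of the sketch is only the shape: extrapolate decisions,
superpose the spreads, satisfice. There is no step at which a human, or
the process itself, construes anyone's "best interest".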

> Why not call a
> spade a spade? You want your FAI to have a "human collective best
> interest" inference engine.

Because this sounds as if someone is defining the "best interests", and as
if the FAI will then serve those "best interests" even when they contradict
our informed decision - which is the With Folded Hands scenario, and other
SF stories of Singularity Regret, which I am not silly enough to bring
about deliberately. Prime Intellect acts in the individual's best interest,
in the intuitive sense of how that often works in human politics; this is
not what we need.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
