Constructing volition (was: my doubts)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Sep 16 2003 - 16:09:06 MDT


Wei Dai wrote:
> [Not cc'ing wta-talk because I'm not on that list.]
>
> On Thu, Sep 11, 2003 at 04:55:24PM -0400, Eliezer S. Yudkowsky wrote:
>
>>Suppose we assume that X has an objective frequency given A, F(x|a). If
>>the subjective frequency assigned by the person to P(x|a) doesn't match
>>the objective frequency F(x|a), then to a first approximation we can say
>>that the subject's "volition" as a moral desideratum should be computed
>>using U(x)F(x|a), while the subject's actual decisions will in fact be
>>computed using U(x)P(x|a). In other words, your "volition" is an abstract
>>entity which your actual decisions only approximate; your volition is the
>>decision you would make if you had perfect information.
>
> Are you aware that given any utility function U and probability
> function P, it's possible to find an infinite set of different pairs of
> functions U' and P', such that the preference ordering over choices
> determined by U' and P' is the same as the preference ordering
> determined by U and P? Someone with U' and P' would behave identically
> to someone with U and P in every possible situation.
>
> Therefore, there's no objective method to separate a person's
> actual preferences into a value component and a belief component, and
> then to compute what his preferences would be if he had "correct"
> beliefs.
>
> How would volitionism deal with this problem?
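One way such (U', P') pairs can be built (a minimal sketch of my own, assuming utilities are allowed to depend on the state, which may or may not be the construction Wei Dai has in mind): pick any strictly positive reweighting over states, tilt the beliefs by it, and divide the utilities by the same factor. In Python:

    # Sketch: two different belief/utility pairs that order every act identically.
    # All names and numbers here are illustrative, not from the original thread.

    states = ["s1", "s2", "s3"]
    P = {"s1": 0.2, "s2": 0.5, "s3": 0.3}      # "true" beliefs
    U = {                                      # state-dependent utilities of each act
        ("a", "s1"): 4.0, ("a", "s2"): 1.0, ("a", "s3"): 0.0,
        ("b", "s1"): 0.0, ("b", "s2"): 2.0, ("b", "s3"): 3.0,
    }

    lam = {"s1": 2.0, "s2": 0.5, "s3": 1.0}    # any strictly positive reweighting
    Z = sum(P[s] * lam[s] for s in states)     # renormalizer so P2 sums to 1
    P2 = {s: P[s] * lam[s] / Z for s in states}               # distorted beliefs
    U2 = {(act, s): u / lam[s] for (act, s), u in U.items()}  # compensating utilities

    def expected_utility(prob, util, act):
        return sum(prob[s] * util[(act, s)] for s in states)

    for act in ("a", "b"):
        print(act, expected_utility(P, U, act), expected_utility(P2, U2, act))
    # Each act's expected utility under (P2, U2) is exactly 1/Z times its value
    # under (P, U), so the two pairs rank every act the same way.

Nothing in the observed choices can tell the two pairs apart; the question is whether anything else can.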

Classical economists tend to construct U(x) by looking at a person's
actual choices, assuming their probability function P obeys Bayesian
rules, and assuming that their choices obey the expected utility rule.
Cognitive psychologists know darn well that people don't use Bayesian
reasoning on probabilities and that their choices aren't consistent under
expected utility, so they try to figure out the probabilities people actually
assign, and the forces that actually influence their choices.

As humans, we simply don't *have* a U(x) in the sense of the expected
utility rule - we are not that consistent. We do, however, have a
decision system that relates utilities to outcomes, outcomes to
probabilities, probabilities to actions, and uses all that information to
assign desirabilities to actions. We just don't do it consistently.
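One well-known descriptive move (this is in the style of prospect theory's probability weighting, offered as an illustrative sketch rather than a claim about what the brain literally computes) is to keep the expected-utility bookkeeping but run the probabilities through a distorting weighting function before they meet the utilities:

    # Sketch of an "expected utility shaped" decision system that is not
    # actually consistent with expected utility: probabilities get distorted
    # by a weighting function before being combined with utilities.
    # (Prospect-theory-style weighting; the parameter value is illustrative.)

    def weight(p, gamma=0.61):
        """Overweight small probabilities, underweight large ones."""
        return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

    def desirability(outcomes):
        """outcomes: list of (probability, utility) pairs for one action."""
        return sum(weight(p) * u for p, u in outcomes)

    # A 1% shot at a big prize gets more pull than its probability warrants:
    print(weight(0.01))                              # noticeably larger than 0.01
    print(desirability([(0.01, 100.0), (0.99, 0.0)]))

A system like that still relates utilities, probabilities, and actions in a recognizable way; it just isn't consistent under the expected utility axioms.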

Some analyses of utility assume that humans are black boxes and that we
are allowed only to look at their actions, and that we are supposed to
deduce a utility function and probability assignment from these actions
and the (false) assumption that people are perfectly structured according
to Bayesian probability and expected utility. If so, there would be an
infinite set of U', P' that could generate those choices, but that might
not generate any ambiguity in the set of possible *volitional* orderings
over the same choices. For example, if you multiply all utilities by a
positive constant, you get the same preference orderings. You also
get the same volitional orderings. So the apparently infinite range of
U(x) merely reflects the fact that utility is a measure rather than a
function. (That is, the "utility" in "expected utility", which is a
mathematical construct, is a measure rather than a function; this is not
intended as an assertion about human cognition.)
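Spelled out with the expected-utility formula, as a standard identity rather than anything specific to this argument: for any constant c > 0,

    \sum_x P(x \mid a) \, c \, U(x) \;=\; c \sum_x P(x \mid a) \, U(x)

so every action's expected utility is rescaled by the same c, and the ordering over actions is untouched; the same goes for the volitional ordering computed from U(x)F(x|a). (Adding a constant to every utility washes out too, since the probabilities sum to one.)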

I haven't looked into the result you refer to, Wei Dai, but my initial
impression is that it assumes infinite degrees of freedom in both U(x) and
P(x). Leaving aside the former, the latter, at least, is usually assumed
to normatively obey Bayesian probabilities, and cognitively it obeys
certain loose non-Bayesian rules. We are surprised when we see that under
certain circumstances people assign subjective likelihoods P(A&B) > P(A),
but having discovered this, we can then predict that most people will do
it most of the time under those conditions. So there are not infinite
degrees of freedom in P(x), either normatively or cognitively, and if you
use this constraint to construct U(x) you will not find infinite relevant
degrees of freedom in U(x) either. Of course I am only reasoning
intuitively here, and I may have gotten the math wrong.
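As one illustration of how a pinned-down P squeezes the slack out of U (a minimal sketch of my own using the textbook probability-equivalent elicitation, not anything specific to Wei Dai's result): once the best and worst outcomes are given arbitrary reference utilities, each indifference point the subject reports fixes another outcome's utility exactly.

    # Sketch: probability-equivalent elicitation under a trusted, Bayesian P.
    # Normalize the endpoints (that is the measure-like freedom), then each
    # elicited indifference point pins down one more utility exactly.

    U_WORST, U_BEST = 0.0, 1.0   # arbitrary normalization

    def utility_of_middle(indifference_p):
        """If the subject is indifferent between 'middle outcome for sure' and
        'best with probability p, else worst', expected utility forces
        U(middle) = p * U_BEST + (1 - p) * U_WORST = p."""
        return indifference_p * U_BEST + (1 - indifference_p) * U_WORST

    # e.g. indifferent between "vanilla cone for sure" and "sundae with p=0.7, else nothing"
    print(utility_of_middle(0.7))   # -> 0.7, with no remaining wiggle room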

But, mostly my reply is that I'm not using the economist assumption that
we have to construct U(x) and P(x) by looking exclusively at people's
choices. From a volitionist standpoint, what I would like to do is open
up people's heads and look at their mind-state, figure out what systems
are working, what they contain, what actual system is producing the
choices, and then, having this functional decomposition, ask which parts
have U(x) nature and which parts have P(x) nature, bearing in mind that
they will overlap. Existing cognitive psychology actually goes quite a
ways toward doing this.

Or to put it another way... suppose I show someone a spinner wheel A that
is one-tenth red, and another spinner wheel B that is nine-tenths red. I
ask: "Would you rather have a chocolate ice cream cone given that A shows
red, or a vanilla ice cream cone given that B shows red?" Technically we
should also show an empty spinner C, in case the person is on a diet and
doesn't want an ice cream cone at all. Suppose the person waves aside our
offer to opt out of the experiment.

If the person then says they would rather have a chocolate ice cream cone
on A than a vanilla ice cream cone on B, it *could* be that they
like vanilla more than chocolate, but that they think that B is extremely
unlikely to come up red for some odd reason, or that they think A is
unusually likely to come up red for some odd reason. But most people,
modeling what is actually likely to be going on inside the chooser's head
- inferentially, rather than deductively - would make the pragmatic guess
that the person prefers chocolate to vanilla.
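The competing explanations pull on different numbers, which is easy to see if you write the desirabilities out (a toy calculation with made-up utilities, assuming the declined no-cone option is worth zero):

    # Toy desirability calculation for the spinner choice.
    # Wheel A pays off (red) with probability 0.1, wheel B with probability 0.9;
    # the utilities are illustrative and "no cone" is taken to be worth 0.

    P_A_RED, P_B_RED = 0.1, 0.9

    def desirability(p_red, u_cone):
        return p_red * u_cone   # expected utility of taking that wheel's offer

    # Explanation 1: honest probabilities, strong preference for chocolate.
    # Picking chocolate-on-A requires 0.1 * U(choc) > 0.9 * U(van),
    # i.e. chocolate worth more than nine times vanilla.
    print(desirability(P_A_RED, u_cone=10.0) > desirability(P_B_RED, u_cone=1.0))  # True

    # Explanation 2: mild preference for vanilla, but wildly distorted
    # subjective likelihoods for the wheels (P(A red) ~ 0.9, P(B red) ~ 0.1).
    print(desirability(0.9, u_cone=1.0) > desirability(0.1, u_cone=1.2))           # True

Both stories reproduce the same observed pick; the pragmatic guess is just that the second one is rare.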

And if we opened up the person's mind-state and looked inside to see what
kind of information-processing was going on, we would find some subjective
likelihood for A and B was being generated, some expected liking was being
generated for vanilla and chocolate, and that these two cognitive quantities
interacted to generate desirabilities for the actions and finally a choice.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

