From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Jun 05 2002 - 10:22:55 MDT
Gordon Worley wrote:
>
> You are playing a gambling game. You have $500. First you are
> given a choice: you can either be given another $100 or you can try to
> win $500 more, but if you don't win, you get nothing. It doesn't matter
> what the odds are, $100 is almost always the more rational choice
> because it is guaranteed, and most people will pick that one. Then you
> are given a second choice: you can either lose $100 or play a game
> where you might not lose any money, but if you lose you'll lose $500.
> Most people will pick the latter in this case, which is an irrational
> choice. The situation is the same as the first time; the only thing
> that changed was the sign on the numbers. (example paraphrased from
> one in CFAI)
This paraphrase is incorrect (or if not, I had better correct the original;
where is it in CFAI?). CFAI should contain a paraphrase of Tversky and
Kahneman on the framing effect. The framing effect experiment is as
follows: Subjects are told to assume themselves $300 richer and are asked
to choose between a definite gain of $100 and a 50% chance of gaining
$200; or subjects are told to assume themselves $500 richer and are asked
to choose between a sure loss of $100 and a 50% chance of losing $200.
Humans tend to choose the sure gain in the first case and gamble on avoiding
all loss in the second case. The critical point is that this framing effect
holds true even though the outcome tree is exactly the same in both cases.
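
To see that the outcome tree really is identical, write out the final
wealth in each branch. A quick sketch in Python (my own arithmetic, with
an arbitrary $0 baseline, not anything from Tversky and Kahneman's
presentation):

    # Framing 1: "you are $300 richer", sure +$100 vs. 50% chance of +$200.
    sure_1   = 300 + 100              # $400 with certainty
    gamble_1 = [300 + 200, 300 + 0]   # $500 or $300, equiprobable

    # Framing 2: "you are $500 richer", sure -$100 vs. 50% chance of -$200.
    sure_2   = 500 - 100              # $400 with certainty
    gamble_2 = [500 - 0, 500 - 200]   # $500 or $300, equiprobable

    assert sure_1 == sure_2                      # same sure outcome
    assert sorted(gamble_1) == sorted(gamble_2)  # same gamble

Either way the subject is choosing between a sure $400 and a coin flip
between $300 and $500; only the description differs.
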
What this shows is that human decisions are the outcome of a balance of
subjective cognitive forces, not a utility function. (Caution: My causal
account is based on my own understanding of intelligence and may differ
slightly from the conventional causal account.) In one case the subjective
attractiveness of a novel $100 gain is balanced against the subjective
attractiveness of a $200 gain, and the subjective attractiveness of a sure
gain is balanced against the subjective attractiveness of a risky gain. Since
there is some subjective attractiveness that results simply from the charm
of gaining money, irrespective of amount, $200 is not twice as attractive as
$100. Furthermore, being told that something has a 50% probability does not
multiply its subjective attractiveness by .5; rather, it is processed as a
"risk" carrying a certain subjective unpleasantness.
Meanwhile, on the opposite side, subjects are told to assume that they are
$500 richer and must choose between a sure $100 loss and a possible $200
loss. I
expect that the primary force driving the decision is a sense of entitlement
to the full $500; having been told that it is theirs, the subjects do not
wish to give it up. Deciding to accept the sure loss would require giving
up that sense of entitlement, and with it the possibility of keeping
everything. This is difficult emotionally, so they choose to gamble on the
50% chance of keeping everything.
Or at least that is how I would explain it. I would note for the record
that there are papers arguing that the current experimental evidence
cannot be accounted for by subjective utility functions, even sliding
utility functions that morph at different levels of wealth; and it is IMHO
more plausible a priori that the human mind makes decisions using complex
subjective emotional mechanisms.
Since everyone knows that $200 * .5 = $100, the subjects would find that
"rationality" (as they know it) provides no advice on how to proceed, and
would make decisions based strictly on subjective value. This is not true
under Gordon Worley's paraphrase above. A 50% chance of winning $500 is
almost always a better bet than a sure gain of $100, unless for some reason
you desperately need an extra $100 and no more. Likewise, a sure loss of
$100 is better than a 50% chance of losing $500, unless losing $500 would
put you under some critical threshold. But note that Gordon Worley says
that the sure gain of $100 is "better" because it is "guaranteed". As a
deliberative thought, this statement seems to correspond to the perceptual
flow of subjective value which I hypothesize for humans.
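
To spell out the arithmetic behind that contrast (my illustration, in
Python):

    # Under Gordon's paraphrase the expected values diverge, so
    # expected value alone dictates an answer:
    ev_sure_gain   = 100          # take the guaranteed $100
    ev_gamble_gain = 0.5 * 500    # = 250; the gamble wins on EV
    ev_sure_loss   = -100         # accept the $100 loss
    ev_gamble_loss = 0.5 * -500   # = -250; the sure loss wins on EV

    # In the Tversky-Kahneman version the expected values are equal,
    # so "rationality" as the subjects know it is silent and subjective
    # value decides:
    assert 0.5 * 200 == 100
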
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence