From: Richard Loosemore (rpwl@lightlink.com)
Date: Sat Aug 26 2006 - 20:38:59 MDT
[begin part 9]
************************************************************
*                                                          *
*   What Happens When Humans are Not Perfectly Rational?   *
*                                                          *
************************************************************
Eliezer S. Yudkowsky wrote:
> Here's a copy of a section from a book chapter I
> recently wrote - the book chapter being titled "Cognitive
> biases potentially affecting judgment of global risks",
> for Nick Bostrom's forthcoming edited volume "Global
> Catastrophic Risks".
>
> **
>
> 4: The conjunction fallacy
>
> Linda is 31 years old, single, outspoken, and very
> bright. She majored in philosophy. As a student,
> she was deeply concerned with issues of discrimination
> and social justice, and also participated in anti-nuclear
> demonstrations.
>
> Which of the following is more probable:
> 1) Linda is a bank teller and is active in the
> feminist movement.
> 2) Linda is a bank teller.
>
> 85% of 142 undergraduates at the University of British
> Columbia indicated that (1) was more probable than (2).
> (Tversky and Kahneman 1983.) Since the given description
> of Linda was chosen to be similar to a feminist and
> dissimilar to a bank teller, (1) is more representative
> of Linda's description. However, ranking (1) as more
> probable than (2) violates the conjunction rule of
> probability theory which states that p(A & B) ≤ p(A).
> Imagine a sample of 1,000 women; surely more women in
> this sample are bank tellers than are feminist bank
> tellers. The original version of this study included 6
> other statements, such as "Linda is an insurance
> salesperson" and "Linda is active in the feminist
> movement", and asked students to rank the 8 statements
> by probability. (Tversky and Kahneman 1982.) However,
> it turned out that removing the disguising statements
> had no effect on the incidence of the conjunction
> fallacy - one of what Tversky and Kahneman (1983)
> characterize as "a series of increasingly desperate
> manipulations designed to induce subjects to obey
> the conjunction rule."
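(The conjunction rule itself is not in dispute: it can be verified by
brute counting over any sample whatsoever. Here is a throwaway Python
sketch -- every number in it is invented, purely to fix ideas:

    import random
    random.seed(0)

    # A hypothetical sample of 1,000 women; the base rates are made up.
    women = [{"teller": random.random() < 0.05,
              "feminist": random.random() < 0.30}
             for _ in range(1000)]

    tellers = sum(w["teller"] for w in women)
    feminist_tellers = sum(w["teller"] and w["feminist"] for w in women)

    # The conjunction counts a subset of the sample, so it can never
    # out-count its own conjunct.
    assert feminist_tellers <= tellers
    print(tellers, feminist_tellers)

No way of counting can make the conjunction beat its own conjunct. The
question, though, is whether that is the calculation people think they
are being asked to perform.)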
Human minds are designed for immensely sophisticated forms of cognitive
processing, and one of these is the ability to interpret questions that
do not contain enough information to be fully defined (pragmatics). One
aspect of this process is the use of accumulated knowledge about the
kinds of questions that get asked, including the particular kinds of
information that are left out in certain situations. Thus, in
common-or-garden nontechnical discourse, the question:
Which of the following is more probable:
1) Linda is a bank teller and is active in the
feminist movement.
2) Linda is a bank teller.
would quite likely be interpreted as:
Which of the following is more probable:
1) Linda is a bank teller and is active in
the feminist movement.
2) Linda is a bank teller and NOT active in
the feminist movement.
It just so happens that this question-form is far more common in
ordinary discourse than the form that follows the strict logical
conjunction. In fact, the strict logical form is quite bizarre in
normal discourse, and if we actually intended to ask it, we would
probably qualify our question in the following way:
Which of the following is more probable:
1) Linda is a bank teller and is active in
the feminist movement.
2) Linda is a bank teller, and she might be
active in the feminist movement or she might
not be - we don't know either way.
We would make this qualification precisely because we do not want the
respondent to bring in the big guns of their cognitive machinery to do
a reading-between-the-lines job on our question.
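To see how much the pragmatic reading changes the verdict, here is a
toy calculation -- again, every number is invented for illustration.
Suppose the description makes "feminist" very probable for Linda, and
"bank teller" is judged independently of it:

    # All numbers invented for illustration.
    p_feminist = 0.90   # the description strongly suggests "feminist"
    p_teller = 0.05     # base rate for "bank teller", assumed independent

    option_1 = p_teller * p_feminist          # teller AND feminist
    option_2_strict = p_teller                # teller (strict reading)
    option_2_pragmatic = p_teller * (1 - p_feminist)  # teller AND NOT feminist

    print(option_1)             # 0.045
    print(option_2_strict)      # 0.05  -- necessarily beats option 1
    print(option_2_pragmatic)   # 0.005 -- loses to option 1 nine to one

Under the strict reading, option (2) must win, by the conjunction rule.
Under the pragmatic reading, option (1) wins by a factor of nine, so
the subjects' answer is only a "fallacy" on the first reading.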
It might seem that this analysis of the Tversky and Kahneman studies
does not apply to one of your other examples:
> Please rate the probability that the following
> event will occur in 1983...
>
> [Version 1]: A massive flood somewhere in
> North America in 1983, in which more than
> 1,000 people drown.
>
> [Version 2]: An earthquake in California
> sometime in 1983, causing a flood in which
> more than 1,000 people drown.
>
> Two independent groups of UBC undergraduates were
> respectively asked to rate the probability of
> Version 1 and Version 2 of the event. The group
> asked to rate Version 2 responded with significantly
> higher probabilities.
These are two independent groups, so neither sees the other version of
the question, and therefore they cannot read between the lines and
infer that the questioner might be leaving out some information. On
the face of it, this seems like good evidence of the Conjunction
Fallacy.
But is it? The two groups have to separately visualize the scenarios.
What are the detailed scenarios that they visualize? It seems prima
facie quite reasonable to suppose that it never even occurred to the
first group that a flood could be a side effect of a massive
earthquake: they just tried to judge the likelihood of a flood for
other reasons, and since they could not recall any "ordinary" floods
that caused that many fatalities, they responded with a low
probability.
The other group, however, have had the idea put into their heads that
an earthquake might occur, and that (by the way) this might lead to
flood fatalities. They have no idea whether this connection
(earthquake leads to flood) is valid, but that issue seems not to be
the question (indeed, it is NOT the question), so they take it as
something of a given, and then simply fall back on their estimate of
the probability of an earthquake with many fatalities. Whether that
estimate is correct or not, they rate *that* probability quite highly
-- higher than "flood but no earthquake".
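My suggested reading of the two groups can be put into toy numbers --
once more, all of them invented:

    # Toy model of the two groups' estimates; all numbers invented.
    # Group 1: an "ordinary" 1,000-fatality flood. Nothing in memory
    # matches, so the estimate comes out low.
    p_ordinary_big_flood = 0.005

    # Group 2: the earthquake-causes-flood link is treated as given
    # (it is part of the question), so only the earthquake is judged.
    p_big_quake = 0.02
    p_flood_given_quake = 1.0   # taken as given, not independently judged

    group_1_answer = p_ordinary_big_flood               # 0.005
    group_2_answer = p_big_quake * p_flood_given_quake  # 0.02

    # True -- and no conjunction fallacy anywhere in sight.
    print(group_2_answer > group_1_answer)

If something like this is going on, the second group's higher rating
reflects a different scenario being judged, not a violation of the
conjunction rule.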
So, once again, the experimental design is effectively comparing
incompatible processes going on inside these people's heads. Or, to be
more precise, it *could* be doing this (my suggested interpretation
would have to be tested: I am only giving an existence proof for an
alternative explanation -- I could do the experiment, or maybe somebody
already has).
> In each of these experiments, human psychology
> fails to follow the rules of probability theory.
Correct: because in normal discourse, human psychology is required to
carry out far more complex, broad-spectrum cognitive processing than the
mere calculation of probabilities.
People are not very good at doing strict probability calculations,
because those calculations require mechanisms that have to be trained
into them rather carefully, so as to avoid triggering all those other
mechanisms -- the ones that, in the normal course of being a thinking
creature, are actually a lot more useful.
But now, how does all this apply to the topic of discussion, which is
whether or not we can make comments on Kurzweil's, or other people's,
estimates of the probability of future scenarios?
If anyone is out there making arguments that strictly depend on the
probabilities of conjunctions, and if they are screwing up their
probability calculations, then go for it: whap 'em on the head with the
Conjunction Fallacy and let's all agree that those futurist predictions
are wrong.
But if the people involved are NOT intending to strictly conjoin their
probabilities, but are instead
(a) giving clusters of likely contributing factors or trends that
depend on intuitive judgements that involve a battery of cognitive
mechanisms barely dreamed of in anyone's philosophy right now, or
(b) giving semi-independent lines of argument in which the degree of
dependence is almost impossible to specify, whether you are a
self-avowed "rational thinker" or not, and which it would therefore be
churlish to attack because the probabilities were not conjoined
properly (see the sketch below),
then all this talk of human irrationality is ... irrelevant and (dare I
say it) irrational.
Irrational, because it presumes a model of human cognition that seems,
in the light of a lot of cognitive science evidence, to be
impoverished: it assumes a model that gives undue prominence to the
role that is, or should be, played by probability judgments. There are
other models of cognition that do not give probabilities such a
starring role.
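To put point (b) in concrete terms: when two lines of argument are not
independent, the probability of their conjunction is not the product of
the two probabilities, and until the dependence is specified the
conjunction can sit anywhere inside some quite wide bounds. A sketch,
with invented numbers:

    # Frechet bounds: with the dependence unknown, p(A and B) is only
    # pinned down to an interval. The two probabilities are invented.
    p_a, p_b = 0.6, 0.7

    lower = max(0.0, p_a + p_b - 1.0)   # maximal disagreement: 0.3
    upper = min(p_a, p_b)               # full dependence:      0.6
    independent = p_a * p_b             # independence:         0.42

    print(lower, independent, upper)    # 0.3 0.42 0.6

Anyone who wants to "correct" a futurist's conjunction has to know
where in that interval the true value lies -- which is exactly the
degree of dependence that I just said is almost impossible to specify.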
If an AGI researcher's entire worldview is that thought is or should be
based on "rational" judgement of probabilities, and that the human
cognitive mechanism is extremely badly designed and can be improved upon
by a more rational design, and if they apply this worldview to their
scientific search for the mechanisms required to build an AGI, are they
going to even *see* evidence that might contradict their worldview, or
will they instead blind themselves to the evidence?
Will they ignore the possibility that human cognition does subtle
things that seem to be more, rather than less, powerful than strict
probability evaluation (as in the examples of human responses to the
above experimental questions)? Will they scorn the human mind as
"irrational" and go looking for confirmatory evidence for this thesis?
And if they did start to see this evidence, would they be able to cope
with the cognitive dissonance involved in discovering something that
undermined their worldview? The rational behavior, after all, would be
to accept the evidence and go back and revise their worldview. Not many
people can do that: cognitive dissonance trumps rationality, it seems.
Richard Loosemore
[end part 9]