Re: The Conjunction Fallacy Fallacy [WAS Re: Anti-singularity spam.]

From: Richard Loosemore
Date: Mon Aug 28 2006 - 10:59:01 MDT


My last words in this argument follow.

This discussion has now become so convoluted and distorted that some of
the distortions are developing a life of their own, and the central
points are getting lost.

I want to try to simplify it by looking at only two points: I am not
doing this because I cannot defend anything else, but only, as I say,
for the sake of clarity.

*** NOTE: Anyone who is bored can just read this prelude:

I really, really hate to pull rank like this, but to Eliezer and to Jeff
Medina I ask this question: do you, either of you, have a postgraduate
degree in cognitive science?

And when you got your postgraduate degree in cognitive science, did the
Psychology Department where you got the degree decide that, for the
first time in their history, they would have to create a new
"Distinction" class, specifically to attach it to your degree?


Well, guess who did?

Back in 1987.

Point #1: Explaining away [sic] the Linda results.

Eliezer has repeatedly said (and Jeff Medina has now echoed this,
unfortunately) that what I have done is to make a certain claim, namely
that the Linda results can be completely explained by the fact that the
subjects in that experiment were simply misunderstanding the
instructions. Here is an example of him saying this:

> The referenced paper by Chart does not support Richard's explicit
> assertion that the conjunction fallacy arose from a particular
> misunderstanding by the research subjects (a possibility that was long
> since refuted a million ways from Sunday, see the paper referenced
> in my reply to Medina)........

He has hammered away at this point again and again, and he always
follows it with the conclusion that such an idiotic mistake on my part
can only mean that I am totally ignorant of the literature. And he
never goes further than this, claiming only that my ignorance is so
obvious and so egregious that he does not need to listen to anything
else I have to say.

But the bizarre thing is that from the very first words I wrote in reply
to this allegation, I explained that this was NOT what I had claimed,
and I then immediately gave a detailed explanation of exactly how my
REAL claim differed from the one Eliezer imputed to me (and I also
showed that I had been clear about this in my original argument, so if
Eliezer had read that argument thoroughly, he should never have made
this mistake in the first place).

Let me demonstrate, by citing this part of my first response to
Eliezer's attack. This quotation includes within it some of the text
from my original post, where I first made the comments that Eliezer took
exception to:

[Loosemore writes:]
> You just attacked an argument that has nothing whatsoever to do
> with the claim that I actually made.
> The claim that I made was *not*:
> (1*) Tversky and Kahneman (etc.) failed to prove their case that
> human reasoning behavior is flawed, because there are
> good alternative explanations that can completely account
> for all of their data.
> In fact, I explicitly disavow this conclusion:
>>> [Yudkowsky:]
>>> In each of these experiments, human psychology fails to follow the
>>> rules of probability theory.
>> [Loosemore:]
>> Correct: because in normal discourse, human psychology is
>> required to carry out far more complex, broad-spectrum cognitive
>> processing than the mere calculation of probabilities.
>> People are not very good at doing strict probability calculations,
>> because those calculations require mechanisms that have to be trained
>> into them rather carefully, in order to avoid the problem of
>> triggering all those other mechanisms, which in the normal course of
>> being a thinking creature are actually a lot more useful.
> You will notice that I clearly acknowledge here that human reasoning
> ability is flawed. I am not challenging this basic point and saying
> that the experimental data supporting it can be explained away. That
> question (about the validity of claim 1*) was thoroughly addressed by
> the literature, and what those studies tried to do was to eliminate
> all the other factors that were contributing to the effects, to
> see if there was any residual effect of pure irrationality.

I hope everyone notices the thing that I marked (in the original) as
"claim (1*)" (note that I used the standard asterisk notation for a
claim whose validity is repudiated)... this is the claim that
Eliezer, and now Jeff, have imputed to me, and yet there it is, sitting
in my post, followed by the words "In fact, I explicitly disavow this
conclusion."

How much plainer could I have been? Well, actually, I did try to be
even plainer, because I hammered it home again with the words "You will
notice that I clearly acknowledge here that human reasoning ability is
flawed. I am not challenging this basic point and saying that the
experimental data supporting it can be explained away."

I took immense pains, in my text, to say that I was making a far, far
more subtle and more interesting point than the trivial one in (1*).
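For any reader who has lost the thread of what the Linda experiment actually tests: subjects rate a conjunction ("Linda is a bank teller and is active in the feminist movement") as more probable than one of its conjuncts ("Linda is a bank teller"), which violates the conjunction rule of probability, P(A and B) <= P(A). A minimal sketch, using made-up numbers purely for illustration:

```python
# Conjunction rule: for ANY events A and B, P(A and B) <= P(A).
# The probabilities below are hypothetical, chosen only to illustrate.
p_bank_teller = 0.05            # P(Linda is a bank teller)
p_feminist_given_teller = 0.60  # P(feminist | bank teller) -- assumed value

# P(teller AND feminist) = P(teller) * P(feminist | teller)
p_conjunction = p_bank_teller * p_feminist_given_teller

# This inequality holds no matter what numbers are chosen above.
assert p_conjunction <= p_bank_teller

# The "fallacy" is that subjects rank the conjunction as MORE probable
# than the single statement, which no coherent probability assignment
# permits.
print(p_bank_teller, p_conjunction)
```

The inequality is what makes the experimental result striking: it cannot be reproduced by any consistent assignment of probabilities, however the individual values are set.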

And in the posts that I have made since then, I have again tried to
bring the discussion back to this point and repeat that Eliezer should
be addressing the actual argument I made, not the trivial and false one
that he claims that I made - but to no avail. He remains completely
fixated on this false interpretation, using it to say nothing else
except, in effect, "I know you are stupid so I don't have to listen to
your arguments."

I am especially disappointed that Jeff Medina has now weighed in and
repeated the allegation, even though he could see my repeated denials of
it. Jeff: one of your biggest concerns is with the irresponsibility of
people who spout their opinions without bothering to read the relevant
evidence, so how could you have missed the fact that I so explicitly
rejected this claim you impute to me, back there in the original post.
Did *you* read it thoroughly? And what did you make of it?

Point #2: The relationship between "Mental Models" and "Heuristics
and Biases", and what this has to say about Eliezer's competence.

Something strange happened in the last day or so, and it is a perfect
illustration of the fact that Eliezer simply does not understand
cognitive science, but only one portion of it.

I am choosing my words very carefully here, because of course this
sounds very much like what he has been saying about me all along.
Notice that I have just demonstrated, above, that he based his
conclusion about my ignorance solely on a single, oft-repeated
allegation that is manifestly false. What I am going to do now is to
show, in a more or less conclusive way, that he really is guilty of
being ignorant of what part of cognitive science is what.

What just happened, in the last day or so, was that Mark Waser used the
phrase "mental modeling" and implied that what I was talking about was
*that* approach to explaining human reasoning.

His use of this phrase was ambiguous, but if it was interpreted to mean
"cognitive modelling" then it was certainly not completely inaccurate.
But it had nuances that I would not necessarily want to buy into. By
itself, not a big deal.

However, the very similar phrase "Mental Models" had come up in the
discussion earlier, in something that was actually a complete sidetrack,
so what then happened was that Eliezer (and to some extent Jeff Medina)
started to pick up on this and began to make accusations that (a) *I*
had claimed that my argument was about "Mental Models", and that (b)
this was clearly another demonstration of my stupidity, because I had
claimed that Mental Models was a subfield of the Heuristics and Biases
field.
Here are the relevant quotes:

[Yudkowsky writes:]
>>> except insofar as Richard keeps seeming to think modeling is part
>>> of the heuristics-and-biases subfield, which it isn't.
[Mark Waser writes:]
>> Could you give an example of where Richard does this?
> Actually, Richard's error is much worse; he thinks
> heuristics-and-biases is a subfield of mental models.
> E.g. Loosemore: 'These people (Chater and Oaksford, at least) know
> everything there is to know about the entire field of human reasoning,
> including the subfield that Yudkowsky refers to as "heuristics and
> biases".'

[Amazing: even the quote that he cites does not say that!!]

and again:

[Yudkowsky writes:]
> ... nor Richard's rather huge implicit mistake in thinking
> that heuristics-and-biases is a subset of the field of
> mental models ...

and, from Jeff Medina:

[Jeff Medina writes:]
> That's where this aspect of the debate began; Eliezer is
> not trying to steer away from modeling, except insofar as Richard
> keeps seeming to think modeling is part of the heuristics-and-biases
> subfield, which it isn't.

The interested reader can track down the bit where the phrase "mental
models" came up in the discussion. I won't elaborate on it because it was
tedious and trivial, but the basic story is that Eliezer came across the
phrase when searching on "Phil Johnson-Laird" and spontaneously made an
unprovoked (implicit) allegation about my ignorance:

> I sampled two of these names, Mike Oaksford and Phil Johnson-Laird.
> I didn't hit any papers in the field of heuristics and biases.
> I see no evidence that they have ever worked in the field of
> heuristics and biases. I did see papers on the construction
> of mental models, which, if you happened to be completely
> unfamiliar with the field of heuristics and biases, could be
> mistaken for information relevant to interpreting the Linda
> experiment.

So, did I actually say that "heuristics and biases" is a subfield
of "mental models"?

The only thing I ever said about mental models was this, in direct
response to Eliezer's out-of-the-blue mention of the phrase, above:

> only someone who had a chronically deficient understanding of
> cognitive psychology would not know the connection between
> Mental Models and all the various human reasoning studies.

I then added that MM was a substantial component of the field of human
reasoning, and that Phil Johnson-Laird's name was not one to be
dismissed casually, the way Eliezer had tried to do.

My conclusion from this stuff about MM?

The first conclusion is just the same boring old story: I did not, as
you can see, make the claims that are now being repeated in an attempt
to convict me of ignorance. Not even slightly.

The second conclusion is far, far more interesting. Eliezer has now
also made a direct statement about the term "mental models" that shows
that his understanding of how this relates to human reasoning came
from a quick Google search, not from any actual understanding of
cognitive science:
[Yudkowsky writes:]
> The people who write research papers about mental models, are not in
> general the same people who write research papers about heuristics and
> biases, although they do talk to each other and read the other field's
> classic papers. Still, someone who doesn't work professionally in the
> field of mental models, yet does know something about the theory of
> mental models, may not know anything about the theory of heuristics
> and biases. And I am informing you, flatly, that you don't.
> Any audience members can either trust me on this, or google "Mental
> models" followed by "Heuristics and biases". There's obvious
> relevance of one field to the other, but they are very different
> fields in practice - even as to the kind of daily work carried out by
> the practitioners - and cursory reading in both fields will
> confirm this.

Let me spell it out in nice, simple language: the general term "mental
models" will come up all over the place if you google it, and to an
outside observer like Eliezer, it might seem that there is little
overlap between this and "heuristics and biases".

But the specific term "Mental Models" would mean something more to a
person who did research in human reasoning: such a person would say
"Oh, you mean Phil Johnson-Laird's book called 'Mental Models'? Yeah,
that was one of the big, influential attempts to explain human reasoning
by something other than a pure formal-logic reasoning system. What he
tried to do was argue that people solved reasoning problems by
something that might be called a heuristic: by constructing a mental
model and manipulating it."

That person might be inclined to say a whole lot more about that book,
but you might interrupt them and say "So is it possible that a person
who knew something about Johnson-Laird's Mental Models may not know
*anything* about the theory of heuristics and biases?"

To which their reply would be: "Not unless they wanted to flunk their
Cognitive Psychology 101 course!" Nobody but a complete fool could
claim to have a student-level understanding of Mental Models, and Human
Reasoning, without also knowing about Heuristics and Biases. They are
all three taught in the same course. They are in the same chapter of
the basic text in Cognitive Psychology.

So could someone say that "There's obvious relevance of one field to the
other, but they are very different fields in practice"?

Well, no, they are just different parts of the general topic of human
reasoning studies.

Eliezer, if you had to actually look up Phil Johnson-Laird's name to
find out who he was, and if, when you did that, you came across the
phrase "Mental Models" for the first time, then how the hell can you
claim to know anything about cognitive science?

And how dare you make these kinds of accusations:

> At this point you're matching one of the classic crank patterns, of
> someone who becomes acquainted with a tiny fraction of scientific
> knowledge, and who then decides that this science is so amazingly
> wonderful that it must be relevant to everything.

... in the face of your own manifest commission of exactly this pattern?

Richard Loosemore.

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:57 MDT