From: Ben Goertzel (ben@goertzel.org)
Date: Fri May 05 2006 - 14:20:22 MDT
Hmmm...
Richard, I will stand by my statement that, in this particular problem,
> > there's no conversational implicature here, and this error may
> > well be pure inferential stupidity.
Of course, I didn't mean to imply that inferential stupidity is the
ONLY reason anyone might get the wrong answer to this problem.
Inattention and misleading instructions could play a role, along with
other reasons that you mention.
For any one experiment, one can make a lot of arguments about the
possible reasons humans make wrong inferences. A careful analysis of
human inferential performance requires looking at behavior across a
lot of different experiments, and that is too much to do in detail in
one's spare time on this email list.
I note that it would be pretty easy to make Novamente's inference engine either
a) get wrong answers on these inference puzzles, similar to the ones
many humans give, or
b) get correct answers to these puzzles.
The difference between a) and b) has to do with various parameters one
can set in Novamente's inference control process, not in the system's
inference rules themselves.
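To make the a)/b) distinction concrete, here is a toy caricature in
Python. It has nothing to do with Novamente's actual code; the "effort"
knob and both scoring functions are invented purely for illustration:

import math

SEQS = ["RGRRR", "GRGRRR", "GRRRRR"]

def proportion_heuristic(seq):
    # Representativeness shortcut: prefer the sequence whose green/red
    # mix looks most like the die itself (2/3 green).
    return -abs(seq.count("G") / len(seq) - 2/3)

def exact_log_probability(seq, p_green=2/3):
    # The "inference rule": log-probability of this exact run of faces.
    # (For these three options, ordering by exact-sequence probability
    # matches ordering by the chance of appearing somewhere in 20 rolls.)
    return sum(math.log(p_green if c == "G" else 1 - p_green) for c in seq)

def choose(options, effort):
    # A crude "control" knob: with low effort the shortcut gets used,
    # with high effort the actual probability rule gets applied.
    score = exact_log_probability if effort > 0.5 else proportion_heuristic
    return max(options, key=score)

print(choose(SEQS, effort=0.2))   # GRGRRR -- the popular wrong answer
print(choose(SEQS, effort=0.9))   # RGRRR  -- the correct answer

The correct rule is sitting right there in both runs; only the control
decision about whether to invoke it changes.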
So, my own view is that humans make these errors, not because they are
fundamentally incapable of doing correct probabilistic inference, but
because, for a variety of reasons, when confronted with reasoning
problems they use an improperly or inadequately tuned inference
control process. This may be because of inattention (not allocating
enough attention to the inference), or because of misleading cues
that bias the inference control process toward looking at the wrong
data or using the wrong knowledge or inference rules, etc.
Anyway, this is a deep issue, but my point is that by "inferential
stupidity" (an imprecise phrase, I admit) I did not mean an inability
to carry out correct uncertain inferences, but rather stupid control
of the inference process in the particular context in question.
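For the record, (1) is the right bet in the puzzle quoted below: RGRRR
occurs inside GRGRRR, so any run of 20 rolls that contains sequence (2)
necessarily contains sequence (1) as well, while the reverse is not
true. A quick simulation (a rough sketch in Python; the trial count is
arbitrary) shows the gap:

import random

SEQS = ["RGRRR", "GRGRRR", "GRRRRR"]
FACES = "GGGGRR"      # four green faces, two red faces
ROLLS = 20
TRIALS = 200000

wins = {s: 0 for s in SEQS}
for _ in range(TRIALS):
    outcome = "".join(random.choice(FACES) for _ in range(ROLLS))
    for s in SEQS:
        if s in outcome:          # sequence appears on successive rolls
            wins[s] += 1

for s in SEQS:
    print(s, wins[s] / TRIALS)    # estimated chance of winning the $25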
-- Ben G
On 5/5/06, Richard Loosemore <rpwl@lightlink.com> wrote:
> Ben Goertzel wrote:
> >> Consider a regular six-sided die with four green faces and two red
> >> faces. The die will be rolled 20 times and the sequence of greens (G)
> >> and reds (R) will be recorded. You are asked to select one sequence,
> >> from a set of three, and you will win $25 if the sequence you chose
> >> appears on successive rolls of the die. Please check the sequence of
> >> greens and reds on which you prefer to bet.
> >>
> >> 1. RGRRR
> >> 2. GRGRRR
> >> 3. GRRRRR
> >>
> >> **
> >>
> >> Obviously, 65% of the undergraduates in this study, betting real money,
> >> chose to bet on (2) over (1) because of conversational implicature
> >> prototyping ecological triggering mechanisms. I'm sure they wouldn't
> >> have made the same mistake if only the instructions had been written in
> >> blue ink.
> >
> > Indeed, there's no conversational implicature here, and this error may
> > well be pure inferential stupidity. (One could try to argue that
> > there is a connection with what kinds of patterns are most commonly
> > observed in nature... but I don't see quite how at the moment...). It
> > seems people are just reasoning something stupid like "Evenly balanced
> > sequences are more likely"
> >
> > However, some inferential errors may well be partly or largely caused
> > by conversational implicature and other such factors...
> >
> > One of the beautiful things about us humans is that we have soooo many
> > different ways of being stupid, with so many different causes ;-)
> >
> > ben
>
> I am just a little stunned at the two comments above (Eliezer's sarcasm,
> and then your partial agreement that this cannot be explained except by
> pure inferential stupidity).
>
> Stunned, because I am amazed at the inferential(?) stupidity that allows
> two AGI researchers to look at this experiment and not be able to see
> any factors that could have been involved. [NOTE: I use the word
> "stupidity," of course, not to get personal :-), but just as a
> rhetorical invitation for you to contrast and compare, if you will, your
> own response to the experiment to the subjects' responses].
>
> So here goes.
>
> First, when subjects turn up to do experiments in psychologists' labs,
> all kinds of junk is going through their heads. One of the main things
> is that, as you know, the general population has some pretty weird ideas
> about what psychology actually is, and what the researchers are up to.
> A lot of the time, I think, they view the experiment as some kind of
> test or competition between them and the experimenter, and they have to
> 'prove' themselves smart in the experimenter's eyes. This, if nothing
> else, lends an element of stress to even the most relaxed of settings.
>
> Second, the experimenter cannot answer questions (often), so if the
> subject reads the instructions and cannot understand them, they are
> supposed to do their best to interpret them. I don't know if
> clarification questions were allowed in the above experiment. Do you?
> I suspect they were not.
>
> Third, subjects are often hunting for "the" way to solve the problem set
> to them. For whatever reason, they assume that it would somehow be
> tricky or deceitful if the experimenter set them problems that require
> them to deploy more than one strategy. So if they find one obvious
> strategy, they go with that.
>
> Fourth, when they are given instructions by the experimenter (like
> "Relax and take your time, because it does not matter"), they sometimes
> (and perhaps frequently) appear to disregard some of the instructions
> (so they hear the experimenter tell them to take their time, but they
> believe that the experimenter is secretly measuring the time it takes
> them, anyway).
>
> Fifth, when the wording contains any element of ambiguity whatsoever
> (and sometimes ambiguity can be forced by the time constraints of the
> experiment, and not be at all obvious when we, at our leisure, read the
> instructions afterwards), they try to construct a model of what the
> experimenter is probably trying to get them to do, to help them resolve
> their confusion.
>
> Sixth, the experimental materials can easily contain confusing features
> that nobody ever discovers. (Such damaging features are discovered so
> often in the years after an experiment is published, without ever being
> noticed by the original experimenters, that it makes you wonder just how
> much effort anyone ever puts into analyzing possible problems with a design
> before going ahead with the experiment).
>
> You're beginning to get the picture, I'm sure.
>
> So, in the case of the above experiment: I had forgotten the punchline,
> so I looked at it and did it on the spot.
>
> Hey presto: my first answer was (2)!
>
> Why? Because, even though I had been primed by all the Conjunction
> Fallacy stuff in the last week, I completely forgot to notice that the
> the first string was shorter than the other two (perhaps because all the
> RRRR strings misled my eye, in the way that you may, depending on your
> mail client, have had your own eye misled earlier in this sentence). And
> on the other hand I did notice (because I had been primed by the only
> relevant information given in the instructions) that there were
> differences between the proportions of Gs and Rs in the three strings.
> I was also rushing because I felt some urgency to prove to myself that I
> could polish it off quickly.
>
> Well well well. I am stupid and irrational, apparently! And there is
> no other explanation for my (or the subjects') behavior: it really
> cannot be anything except irrational stupidity.
>
>
>
> Some of the troubling little factors I catalogued above are matters of
> good experimental design, but some of them are just so embarrassing to
> the entire psychological community that they are ignored. I mean, if
> *all* subjects come to your lab trying to second-guess your intentions,
> no matter what you say to them, what the hell do you do? Give up being
> an experimental psychologist? Of course not: you agree with all the
> other psychologists to overlook the problems (at least, if your name is
> not Gibson), and say that it will all come out in the wash.
>
> These are some of the reasons why the conclusions derived from
> experiments like these (and the other several hundred in the literature)
> don't merit the kind of most-humans-are-irrational triumphalism that I
> see around this list all too often. To say nothing of the kind of silly
> sarcasm that only seems to demonstrate a complete ignorance of both the
> complexity of experimental design and the implications that that has for
> the robustness of conclusions derived from experiments.
>
> [And YES YES YES, if you could somehow do the impossible and eliminate
> all the factors I just mentioned (you would have to at least draw the
> subject's attention to the fact that the first string is shorter than
> the others (*)), I know that we are still going to get an interesting
> (but reduced) percentage of people who simply cannot relate the length
> of the string to the probability of getting the string to come up. But
> in that case see my parallel arguments in the other post, which still
> apply.]
>
> If either of you were actually speaking to a more robust version of the
> above experimental design, when you assessed the implications in your
> comments above, then let me know. If you were, that would have been a
> little sneaky, no?, because that really was not the experimental
> protocol that was quoted, and I, poor fool, have referenced that
> particular protocol in my example and not wasted a day chasing through
> the literature looking for other, more sophisticated versions of the
> design that eliminated ALL of the above issues (never learn, do I? :-)).
>
>
> Richard Loosemore.
>
>
>
> (*) Make the subject take part in an example session in which a bunch
> of pseudo subjects take different positions (some of them betting on the
> short sequences), and give the subject the opportunity to observe a few
> cases where someone else goes for a short sequence that is identical to
> their own sequence up to that point, but in which the subject doesn't
> get the $25 payout (while of course the other person does) and instead
> has to wait for one more roll of the die to see if they win. Watch
> their face after that situation comes up (just for fun), and then later
> give them the real test. I bet $25 that after that, the number of
> stupid-irrationals goes down by a factor of two, at least. It might even
> eliminate the effect. Doesn't prove a lot, except the sensitivity of
> the experiment to factors other than inherent stupidity - which latter
> ought to be constant, no?
>
>