Re: Overconfidence and meta-rationality

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Mar 21 2005 - 00:48:51 MST


One of the arts I now espouse as being pragmatically useful for human
rationality is an art of sticking as close to the question as possible,
in terms of causal proximity and sufficient indicators.

To steal an example from Judea Pearl (a Pearl of wisdom, in Emil's
hideous phrase), suppose we draw a causal graph as follows: The value
of the SEASON variable affects the probability of the SPRINKLER being on
and the probability of RAIN falling, which in turn can make the sidewalk
WET, which means the sidewalk might be SLIPPERY. So which SEASON it is,
has a definite effect on whether the sidewalk is SLIPPERY. But if we
measure the variables RAIN and SPRINKLER, or even just the variable WET,
then the variable SEASON can provide us no *additional* information
about the variable SLIPPERY. SEASON becomes conditionally independent
of SLIPPERY once WET is measured.
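
If you want to see the conditional independence concretely, here's a
minimal sketch in Python.  The graph structure is Pearl's; the
conditional-probability numbers below are made up purely for
illustration.  Summing out SPRINKLER and RAIN gives the same
P(SLIPPERY | SEASON, WET) for every season:

seasons     = ["spring", "summer", "fall", "winter"]
p_sprinkler = {"spring": 0.4, "summer": 0.7, "fall": 0.2, "winter": 0.05}
p_rain      = {"spring": 0.5, "summer": 0.1, "fall": 0.6, "winter": 0.3}

def p_wet(spr, rain):               # P(WET | SPRINKLER, RAIN)
    if spr and rain: return 0.99
    if spr or rain:  return 0.90
    return 0.01

def p_slippery(wet):                # P(SLIPPERY | WET): depends on WET alone
    return 0.7 if wet else 0.02

def p_slip_given(season, wet):      # P(SLIPPERY | SEASON, WET) by enumeration
    num = den = 0.0
    for spr in (False, True):
        for rain in (False, True):
            p = 0.25                # uniform P(SEASON); cancels in the ratio
            p *= p_sprinkler[season] if spr else 1 - p_sprinkler[season]
            p *= p_rain[season]      if rain else 1 - p_rain[season]
            p *= p_wet(spr, rain)    if wet  else 1 - p_wet(spr, rain)
            den += p
            num += p * p_slippery(wet)
    return num / den

for season in seasons:
    print(season, p_slip_given(season, True))   # 0.7, whatever the season

Change the seasonal numbers however you like; so long as SLIPPERY
depends on the rest of the graph only through WET, the answer doesn't
move.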

Similarly, suppose that the Wright Brothers are about to launch the
Wright Flyer and someone walks up and says: "Human flight is a
religious concept. There's no evidence that human beings can fly; the
only instances of flying human beings are angels in religious paintings.
Since this is obviously a religiously driven enterprise based on pure
faith, that Flyer will never fly. Every past plane has crashed, and my
empirical generalization is that your plane will crash too; that's the
scientific method."

If every previous plane has crashed, then induction does suggest that
this plane will crash too. But "every previous plane has crashed" is a
vague and semitechnical hypothesis; it can't compete with a technical
theory of aerodynamics that predicts quantitatively when, where, and how
hard a plane will crash. And this same *technical* theory of
aerodynamics predicts the Wright Flyer will fly. (See _A Technical
Explanation of Technical Explanation_.) From a Bayesian standpoint the
technical theory eats the semitechnical theory, and swallows it
entirely, leaving no scraps of data for the semitechnical theory to
explain. So there's no use in standing around indignantly repeating,
"But every previous plane has crashed! Yours must crash too!"

It's an empirically undeniable fact that enterprises based on pure faith
tend not to fly. The accusation of religious thinking is not an
inferentially irrelevant argument.

But once I produce a theory of aerodynamics with which to analyze the
Wright Flyer, I render irrelevant any information about the Wright
Brothers' motives. Once we have the aerodynamic analysis, we have
measured a variable standing in much closer causal proximity to the
matter of interest than the Wright Brothers' psychology. The flying or
non-flying of the Wright Flyer is conditionally independent of the
Wright Brothers' religious beliefs given that we have analyzed the
aerodynamics of the Wright Flyer. Nature doesn't care directly about
whether the Wrights are driven by religious faith or a properly gloating
atheism; Nature only checks the proximal indicator of how the plane is
put together. Religious thinking only affects the plane through the
intermediate cause of the plane's design.

This is why, when people accusingly say the Singularity is a religious
concept, or claim that hard takeoff is inspired by apocalyptic dreaming,
I feel that my best reply remains my arguments about the dynamics of
recursively self-improving AI. That question stands in closer causal
proximity to the matter of interest. If I establish that we can (or
cannot) expect a recursively self-improving AI to go FOOM based on
arguments purely from the dynamics of cognition, that renders the matter
of interest conditionally independent of arguments about psychological
apocalypticism.

Of course the people who originally launched the argument still stand
around afterward indignantly saying "But... but... it sounds
apocalyptic!" That's human nature. "You can't tell me the sidewalk
isn't slippery! It's fall! It often rains in the fall!"

There's an art of sticking as close to the question as possible -
arguing about issues that stand in the closest possible inferential
proximity to the main question; trying to settle questions that, if we
knew the answers to them, would render more distant questions irrelevant.

And this is a valuable habit, because where anyone can argue about the
other guy's psychology, or which ideas match a vague category that tends
to fail, arguing in close proximity to the question tends to force you
to study technical things - to learn something about science, something
you'll hopefully remember even when the issue has passed. Yes, I know,
that argument isn't relevant to the Way of cutting through to the
correct answer on only this one specific question. But getting into the
habit of arguing technical things instead of arguing psychology is a
learned behavior that, over time, ends up mattering a great deal in the
pragmatic human business of rationality.

That's another reason why I don't trust the modesty argument. It seems
to me that you can argue indefinitely over who's more rational, without
ever touching on the meat of a question. Robin Hanson and I have been
tossing arguments back and forth at each other. Imagine if, instead of
doing that, we just argued about which of us was more inherently
rational and therefore should be assigned the greater weight on the
question of modesty, *without* ever touching on our reasons for
approving modesty or not. (This is not to be confused with our separate
argument over whether I (Eliezer) can rationally estimate myself to be
more rational than average; I am arguing the affirmative, but I am not
saying that Hanson should therefore accept my opinion on the modesty
argument, reasons unseen. In that sub-argument my (estimate of my own)
rationality is a direct matter of interest, not being argued in order to
infer something else. Such are the hazards of choosing
"meta-rationality" as the main question.)

If I am rational, then I should have decent reasons - Bayesian causes -
for believing as I do. Once I have disgorged my reasons for believing
something, my rationality becomes much less inferentially relevant to
whether my belief is probably correct. My causes for belief, if I have
told them truly and completely, stand as a variable in closer causal
proximity to the matter of interest than my 'rationality'. My
'rationality' is expressed only in the causes that influence my beliefs.
If, despite being rational most of the time, I admit unusually stupid
causes for belief on one occasion, I will probably end up being wrong on
that occasion. Or a usually irrational person, who happens to admit
rigorous reasoning on one occasion, will probably be right on that occasion.

Thus, I still think that people who disagree should, pragmatically, go
on arguing with each other about the matter of interest, instead of
immediately compromising based on a belief in the probable rationality
of the other. If two people really do happen to agree on their
estimates of each other's psychological rationality, then sure, they can
go ahead and compromise their probabilities; but they ought still to
tell their reasons to one another, just in case, so that they learn
something. If two people each think the other is being irrational, then
at least one of them must not be very meta-rational - but if so, they
can still learn more by arguing with each other about the direct facts
of the matter than by arguing over which of them is the
non-meta-rational one.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

