Re: Bad Bayesian - no biscuit!

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Sat Jan 22 2005 - 09:40:21 MST


Brett Paatsch wrote:
> Or as Feynman ([para 39] in accompanying post) said:
>
> "This method [science] is based on the principle that observation is the
> judge of whether something is so or not. All other aspects and
> characteristics of science can be understood directly when we understand
> that observation is the ultimate and final judge of the truth of an
> idea. But "prove" used in this way really means "test", in the same way
> that hundred-proof alcohol is a test of the alcohol, and for people
> today the idea really should be translated as, "The exception tests the
> rule." Or, put another way, "The exception proves that the rule is
> wrong". That is the principle of science. If there is an exception to
> any rule, and if it can be proved by observation, that rule is wrong."

And as Feynman said in the _Lectures on Physics_:

"Philosophers, incidentally, say a great deal about what is absolutely
necessary for science, and it is always, so far as one can see, rather
naive, and probably wrong. For example, some philosopher or other said it
is fundamental to the scientific effort that if an experiment is performed
in, say, Stockholm, and then the same experiment is done in, say, Quito,
the same results must occur. That is quite false. It is not necessary
that science do that; it may be a fact of experience, but it is not
necessary. For example, if one of the experiments is to look out at the
sky and see the aurora borealis in Stockholm, you do not see it in Quito;
that is a different phenomenon. "But," you say, "that is something that
has to do with the outside; can you close yourself up in a box in Stockholm
and pull down the shade and get any difference?" Surely. If we take a
pendulum on a universal joint, and pull it out and let go, then the
pendulum will swing almost in a plane, but not quite. Slowly the plane
keeps changing in Stockholm, but not in Quito. The blinds are down, too.
The fact that this has happened does not bring on the destruction of
science. What is the fundamental hypothesis of science, the fundamental
philosophy? We stated it in the first chapter: the sole test of the
validity of any idea is experiment. If it turns out that most experiments
work out the same in Quito as they do in Stockholm, then those "most
experiments" will be used to formulate some general law, and those
experiments which do not come out the same we will say were a result of the
environment near Stockholm. We will invent some way to summarize the
results of the experiment, and we do not have to be told ahead of time what
this way will look like. If we are told that the same experiment will
always produce the same result, that is all very well, but if when we try
it, it does not, then it does not. We just have to take what we see, and
then formulate all the rest of our ideas in terms of our actual experience."

I reply: Nonetheless we *observe* that the same experiment *does* return
the same answer in Quito as in Stockholm, once we understand how to perform
the "same" experiment. The more fundamental the level on which we compute
our model, the more the underlying laws are exactly the same in every
observation. This is not a thing that philosophers dreamed up a priori; it
is a thing humanity has discovered through experience - but nonetheless it
is so.

It may be that someday we will understand that reality is *necessarily*
regular, that this is the way things *must* be, and that it could not have
been any other way. Historically, humanity will still have discovered this
point from observation, but our future selves may be so strongly attuned to
reality that, like the universe itself, they cannot conceive of things
being other than the way they are. Or not. I don't know if that's what I
would want, even given the premise. But if some future intelligence *does*
somehow manage to become absolutely certain of something, and ve indeed
wins on every single prediction, vis Bayesian score would be higher than
mine - even if contemplating vis absolute certainty would give a modern-day
rationalist the squiggles. I don't find it implausible that in reality
itself there are things that absolutely cannot be other than the way they
are, even if a rationalist cannot avoid uncertainty about them.
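
As an illustrative aside, here is a minimal sketch (in Python, with numbers
of my own choosing, not anything from this thread) of the logarithmic
Bayesian score that comparison rests on: a predictor who assigns probability
1 to every outcome that actually occurs takes no penalty at all, while a
rationalist who always reserves a little probability for being wrong pays a
small price on every prediction, even if ve never actually loses.

  import math

  def log_score(assigned_probabilities):
      """Sum of log-probabilities assigned to the outcomes that occurred.
      Zero is the best possible score; more negative is worse."""
      return sum(math.log(p) for p in assigned_probabilities)

  # A hypothetical intelligence that is absolutely certain and never wrong:
  certain = log_score([1.0] * 100)    # 0.0, the best score attainable

  # A rationalist who is also never wrong but hedges at 99% each time:
  hedged = log_score([0.99] * 100)    # roughly -1.005

  print(certain > hedged)             # True: certainty that always wins scores higher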

Feynman's advice, in the classical tradition of rationality, is about the
way in which human beings discover things, and about the fallibility of
human discoveries even after they are made. But despite all cautions about
human fallibility, not one of all the strange and unexpected events that
happened in the 20th century violated conservation of momentum. Reality -
we *observe* this, we do not say it a priori - is very constricted in the
kind of surprises it has presented us with. Sometimes we even discover new
and unexpected laws of physics, but the new laws still have the same
character as old physics; they are universal mathematical laws.

I think it is now okay to say that there is something important about a
*fundamental* law of physics needing to work the same way in Quito as in
Stockholm. There is something important about physics being simple math.
We do not necessarily understand *why* it is so, at this point in human
history. But it is not a dictum of philosophy, it is a lesson of experience.

It is a lesser lesson of experience that people don't wake up with blue
tentacles. This rule of thumb is not just a philosophical dictum, and if
you violate it, you may end up in trouble.

All correct theories about reality are necessarily consistent with each
other; imperfect maps may conflict, but there is only one territory. If
you make up a story that "explains" waking up with a blue tentacle, *when
it never actually happened*, there is a discordant note - that story is not
necessarily consistent with everything else you know, let alone consistent
with the territory. Consider how different is the skill of explaining
truth from the skill of explaining falsity! The former skill requires that
the explanation be consistent with every other true theory in our
possession about the universe, so that we may rule out some explanations,
or question some previous theories. The latter skill... well, I'm not sure
what to say; the rules would seem to be arbitrary. We want to train
ourselves to excel at explaining true things, not explaining false things,
for only the former is the strength of a rationalist.

Just because you don't know what the future brings, doesn't mean that
reality itself will throw just anything at you. Just because *you* don't
know *absolutely* that something *won't* happen, doesn't mean that if you
devise a random fiction, it would be theoretically possible for one with
total knowledge of Nature to explain it. A random fiction is most likely
an event that could never be woven into the thread of this our real world.
If observations alone are cause for explanations, you are less likely to
try to explain the unexplainable.

> Ah but don't you see. No one in all of human history has ever woken up
> with a functioning tentacle in place of their arm - to the best of *my*
> current knowledge only. I didn't forget that that was to the best of
> *my* current knowledge only when I entered into the spirit of your
> hypothetical. I didn't forget that my current knowledge is knowledge
> acquired in a particular way and that ultimately it is provisional
> knowledge only. I didn't have to have considered or devoted mindspace to
> the hypothetical you put before you put it. I thought of it only when
> you invited me to imagine it.

My inviting you to imagine a blue tentacle might or might not be a good
reason to *imagine* a blue tentacle, but it surely was not a good enough
reason to come up with an *explanation* for a blue tentacle. Only a real
observation would be cause for that, and reality is rather unlikely to
present you with that observation.

The measure of your strength as a rationalist is your ability to be more
confused by fiction than by reality. If you are equally good at explaining
any outcome, you have zero knowledge. I presented you with a fiction, an
event that was never part of this our real world. You should not have been
able to explain it. It is a virtue to be able to explain history books,
but only if you are *not* able to explain online webcomics.
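
To put a toy number on "equally good at explaining any outcome" (my own
illustration, in Python, with made-up figures): if a hypothesis assigns the
same likelihood to every possible observation, Bayes' theorem hands you back
your prior untouched, and no observation can ever teach you anything about it.

  def posterior(prior, likelihood_if_true, likelihood_if_false):
      """P(hypothesis | observation) by Bayes' theorem, for a binary hypothesis."""
      evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
      return prior * likelihood_if_true / evidence

  # A "theory" that explains a blue tentacle and its absence equally well:
  print(posterior(0.5, 0.5, 0.5))    # 0.5 -- posterior equals prior: zero knowledge

  # A theory that concentrated its probability mass on "no tentacle" is
  # genuinely confused by the fiction, and the observation moves it sharply:
  print(posterior(0.5, 0.01, 0.99))  # 0.01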

A true anecdote:

Once upon a time, I was on an IRC channel when R comes in, saying that his
friend H is having trouble breathing; R needs advice. R says that the
ambulance people came in, checked H out, and left, even though H was still
having trouble breathing. And I look at this and in a fleeting moment of
confusion I think: "What the hell? That doesn't accord with anything I
know about medical procedure. I've read newspaper stories about homeless
people who claim to be sick to get a brief bit of shelter, and the
ambulance crews know they're faking but have to take them in anyway." But
I suppress that fleeting moment of confusion, and say... I forget what I
said, but I think it was something like, "Well, they're the experienced
medics - if they say H doesn't need to visit the emergency room, H must
really not need to visit the emergency room. Trust the doctors."

A bit later R returns to the IRC room, angry. It turns out that H was
making up the whole thing, trying for sympathy, to scam a bit of money,
whatever, and there never was an ambulance.

And I said to myself: "Why the hell did I accept this confusing story?
I'm no better than those apocryphal high school students speculating about
thermodynamics. Next time, I vow to notice when I am confused, and not let
the critical hint of my bewilderment flit by so quickly."

It's really annoying that my mind actually got all the way to the point of
being confused, and I just squashed it and accepted the story. Think of
the glory that would have accrued to me as a rationalist, if I alone on the
IRC channel had said: "This story is so confusing that I may want to deny
the data. How sure are you that your friend's story is true? Were you there?"

Therefore did I devise this saying, to chide myself for having failed to
distinguish truth from falsehood: "Your strength as a rationalist is your
ability to be more confused by fiction than by reality."

> In the recent discussion where John C Wright finds
> god, Damien didn't forget all the novels he had read, the movies he had
> seen etc in couching his arguments to John C Wright, quite to the
> contrary, he integrated his understanding of such cultural biases, and
> pointed out that John C Wright had had the sort of experience that 'fit'
> with his culture rather than one that would have 'fit' with a different
> culture.

The logical form of Damien's argument was that since Wright's purported
story, which might be real and might not be, was drastically inconsistent
with experiment, and drastically consistent with known fictions, it was
probably also a fiction.

This doesn't mean we are reasoning from fictions as if they were real
events. It means we are being aware of the probable causes of human
delusion. But it is necessary to first investigate the question of
consistency with science; even true statements will often have some *vague*
resemblance to fiction, because there are so many fictions out there.

> My explanation was only provisional so if it happens I'll be open to
> alternative explanations. And if it happens I won't have to throw away
> all my experiences or forget stuff to explain it. I will only have to
> change my model and I'll only have to change it in certain ways.

If anyone ever wakes up with a blue tentacle, then you were virtuous to
claim in advance that the event was explicable. If the real chain of
events leading up to the blue tentacle matches your given reason, then you
were virtuous to claim that reason as your specific explanation.

If no one ever wakes up with a blue tentacle, then clearly a blue tentacle
wasn't the sort of thing which would ever be woven into reality by a
sequence of events that would constitute an "explanation" of it, and it was
a mistake to claim that a blue tentacle was an explicable event.

What would be your explanation if one day, everyone in the world began
observing that two of something plus two of something made five of something?

>> When you have only a poor explanation, one that doesn't make things
>> ordinary in retrospect, just admit you don't have an explanation, and
>> keep going. Poor explanations very, very rarely turn out to be
>> actually correct.
>
> I don't think that this is right, or that it is a logical conclusion to
> draw from the better parts of your argument in your essay. We have maps
> of the terrain of reality because we need them. Maps have utility. If
> you give me a poor map and I know nothing of you and find that the map
> is wrong then, in that case yes, perhaps I might be better off without
> that map altogether, but if the map I have is one that I have
> constructed myself, then when I find it differs from the terrain I can
> just correct or improve the map.

That is an argument for: "I will sit down and write a story, knowing it to
be fiction, about how a secret organization came into my apartment and
replaced my arm with a blue tentacle. I do not *believe* this has
happened. No, seriously, I don't believe it and I'm not going to act as if
I believed it, because it's a stupid explanation. But maybe the mental
exercise will shake something loose in my mind, and I'll think of a better
explanation."

To say that it can have utility to mentally extrapolate the consequences of
a premise is not the same as believing that premise. One must be careful
here; if you act like you believe something, or if you end up emotionally
attached to the belief, I don't credit you as a rationalist just because
you claim you didn't believe you would win the lottery, you bought the
tickets "for the drama" of it, etc. People with a fragmentary
understanding of the Way sometimes anticipate that they can pass as
rationalists by claiming not to believe the things they anticipate.

>> A gang of people sneaking into your room with unknown technology is a
>> poor explanation. Whatever the real explanation was, it wouldn't be
>> that.
>
> I think you can only establish that it's poor (for others than you) in
> relation to the provision of a better one. "I don't know", whilst a
> fair and honest answer, is not any sort of explanation. My answer shows
> you I don't know but doesn't leave you (or importantly) me merely and
> completely bewildered. It gives me things to check.

That is not an *answer*. It is not something to which Bayesian reasoning
gives a high probability. That is a science fiction story, a tool for
brainstorming an answer. I have sometimes derived interesting ideas from
my attempts to write SF, but I know those stories for fiction, not reality.

If you see something that looks like a poor explanation but is the only
explanation you have, it may take a bit of effort to achieve a state of
mind where you *really* don't anticipate it - rather than claiming to
yourself that you are dutifully skeptical.

> And it is very hard for us as individuals to take others'
> "rationalities" as givens when we don't get to see the others'
> observations as our own observations. Second (or more) hand
> "observations" have to be discounted to some extent on first hand ones.

See Robin Hanson and Tyler Cowen's paper on meta-rationality, "Are
Disagreements Honest?" http://hanson.gmu.edu/deceive.pdf

> Progress depends on people (as change agents) being willing to stick
> their necks out to try to explain.

That doesn't require that you bet on, anticipate, or believe a hypothesis,
before it is confirmed. It means that you write science fiction about a
poor hypothesis, to get your mind working on the problem.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

