Re: large search spaces don't mean magic

From: Daniel Radetsky (daniel@radray.us)
Date: Wed Aug 03 2005 - 18:41:58 MDT


"Eliezer S. Yudkowsky" <sentience@pobox.com> wrote:

> If you read http://yudkowsky.net/bayes/technical.html you will see why you
> should never assign probability zero to anything.

Give me some credit. I was careful never to say "zero probability," just in case
something I had assigned zero probability actually happened and I then wanted to
be able to believe it.
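
(To spell out the technical point, writing H for a hypothesis and E for a piece of
evidence: Bayes' rule gives P(H|E) = P(E|H)P(H)/P(E), so once P(H) = 0 there is no
evidence E that can ever raise it above zero. I understand that point; it's part of
why I keep talking about justification rather than probability.)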

> The best probability estimate we can get is by analogy to historical cases
> from dogs to Lord Kelvin. End of story! You cannot demand a specific
> account of how an AI could break free from a box, else assign probability
> zero.

But I never did that. This is not about truth; it's about justification. I am
not justified in assigning any particular probability to the existence of
plausible exploits. So if I treat it as something I ought to worry about, I am
not justified in doing so, and am therefore being paranoid.

> It is like asking for a specific account of how an opponent could win a game
> of Go against you, else assigning probability zero to your loss.

It's more like saying that if I don't know of any means by which he could win, I'm
not justified in assuming that he will. However, I do know a means by which you could
beat me at Go: you could make a series of moves in such a way that you control
more territory.

> If you permit (and you must permit) historical generalizations about similar
> but not identical situations, such as past games of Go, in the absence of
> specific exhibited possible winning moves against you, then you must permit
> historical generalizations about similar failures of physical theory and
> failures of imagination, in the absence of specific exhibited possible
> winning moves against you.

I'll permit historical generalizations, but what's unclear is whether, in the
case of physics, those generalizations should tell me that I don't know what's
going to happen, or that some specific thing is going to happen. I take the
former position.

> I think that's essentially the end of the discussion so far as I'm concerned.

If you want to stop replying, that's fine, but I'm far from conceding defeat.

> You are simply using probability theory incorrectly. If you read up on
> technical rules for assigning probability in the absence of specific support,
> you will probably get a better idea of where your verbal argument goes wrong,
> even though you cannot use these methods to calculate a quantitative
> probability in this case.

I don't think my verbal argument goes wrong. We agree that we are not justified
in assigning any probability to the existence of exploits. I propose the
principle that if we are not justified in assigning any probability to X, we
are not justified in worrying about X. Keep in mind that it might be the case
that we ought to worry about X even if we aren't justified in doing so, but
that's not an argument we can make. Do you disagree with my principle? It seems
like you have to.

> If you believe that some principle of rationality requires you to assign a
> zero probability to something that could actually go ahead and happen, or a
> negligible probability to something that stands a good chance of really
> happening, then whatever you are doing is not rationality.

I'm guessing that you think (physical) exploits stand a good chance of
happening, and so form a counterexample to my principle. I'm not sure I
buy that, but in any case it doesn't matter, because I'm assigning zero
justification, not a zero probability. Truth is independent of justification.

> I'm not sure there's anything anyone can say to you beyond that. You appear
> to have leaped to a conclusion and to be using an alleged principle of
> rationality to justify it, which principle accords not with probability
> theory, nor exhibits qualitative correspondence to common sense.

I disagree. I think my principle exhibits plenty of qualitative correspondence
to common sense.

> No one else here agrees with your principle and you have made no case for
> it. If you continue to appeal to the principle, you will convince yourself
> but no one else.

My case is essentially that if you do not appeal to justification for the likelihood
of particular worries, then you cannot adjudicate between real and fake worries.
Why are you worried about exploits and not invisible ninja hippos? I would say
that I'm not justified in worrying about ninja hippos because of my appeal to
historical generalizations about the likelihood of ninja hippo attacks. I
don't think that a historical generalization tells me that exploits are likely.
In fact, I don't think anything tells me that they are likely. So I don't
believe in them. Do you think that historical generalizations do tell you that
exploits are likely, as opposed to just not ruled out? Do you think that the
same principles of maximum entropy tell you that there are exploits but no
ninja hippos? I don't get it.
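
(As far as I can tell, the maximum-entropy machinery with no constraints just hands
back a uniform distribution: for a bare yes/no proposition it maximizes
H(p) = -p log p - (1-p) log(1-p), which peaks at p = 1/2. It would assign the same
1/2 to "there is an exploit" and to "there are ninja hippos," which is no help at
all in telling the two apart. If you mean to apply it with some constraint I'm not
seeing, say which one.)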

Daniel


