Re: justification

From: Daniel Radetsky (daniel@radray.us)
Date: Thu Aug 04 2005 - 17:02:49 MDT


On Thu, 04 Aug 2005 15:07:05 -0400
"Michael Vassar" <michaelvassar@hotmail.com> wrote:

> The difference between "exploits" and "ninja hippos" is primarily one of
> specificity and secondarily one of implication. Relevant..."jokeish".

Okay, I'm not really sure what you were trying to say after that first
sentence. Perhaps you meant something like: "Since 'there are exploits' is a
very vague claim, and 'there are ninja hippos' is a very specific one, the
former has a higher probability." I don't want to attack this claim, because
I'm not sure it's yours.
Care to clarify?

> Distant ninja hippos are of little relevance to our planning
> however, so while the probability may be high, the probability times
> implication (impact on utility of actions) is low. The implications are
> also not well defined, and there is little reason for thinking that they can
> be defined, so the thought experiment fails in a manner parallel to Pascal's
> wager.

Okay, points:

1. Since I take "implication" to be "what would happen if X were the case," I
define the implication of ninja hippos as follows: they will kick all of our
asses with their awesome ninja skills.

2. Pascal's wager is claimed to fail in a number of different ways. See:

http://plato.stanford.edu/entries/pascal-wager/

3. As is probably obvious from a mildly charitable reading of my emails, I
didn't mean distant ninja hippos. I meant ninja hippos hiding in your closet,
or something similarly relevant-if-it-were-the-case.

> Since the Kolmogorov complexity of a god or a ninja hippo which
> wants you to do X (e.g. one which changes the utility implications of any
> particular behavior in any particular way) is roughly constant across the
> space of possible values of X, and since we have no Bayesian valid evidence
> for updating our priors, nor any way of gaining such evidence, our rational
> behavior does not differ from what it would be if ninja hippos did exist.

So if a fear satisfies these three conditions, we ought not to worry about it.
Now, the last two conditions just amount to saying "We are not justified in
believing in ninja hippos," and furthermore both support my position on
exploits, as we have no valid evidence for exploits and no way of gaining such
evidence.

Honestly, as might be expected from my pathetic, substandard rationality, I
don't know why the first condition has any more than a trivial impact on the
question, so help me out: suppose that the question about the existence of
exploits satisfies conditions (2) and (3); that is, we have no evidence for
exploits and no way to gain such evidence. But suppose that the complexity of
boxed AIs attempting to find exploits in their boxes and wanting you to do X
(or whatever the analogous example would be) is not constant across possible
X. Ought we then to believe in exploits? Why?
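
To make the bookkeeping concrete, here is a rough sketch (Python, with made-up
action names, utilities, and priors, so treat every number as a placeholder) of
how I understand the cancellation argument behind condition (1). If the prior is
the same for every "hippo wants X" hypothesis, the hippo terms wash out and the
ranking of actions matches the no-hippo ranking; if the prior is lopsided, the
ranking can flip.

# Illustrative only: two toy actions, a toy base utility for each, and a
# reward/penalty of +/-10 depending on whether a hippo that wants a given
# action exists.
actions = ["open the closet", "leave it shut"]
base_utility = {"open the closet": 1.0, "leave it shut": 0.0}

def expected_utility(action, hippo_prior):
    """hippo_prior maps each 'hippo wants <action>' hypothesis to a probability."""
    u = base_utility[action]
    # No-hippo world contributes the base utility.
    eu = (1 - sum(hippo_prior.values())) * u
    for wanted, p in hippo_prior.items():
        # A hippo that wants this action rewards it; any other hippo punishes it.
        eu += p * (u + (10.0 if wanted == action else -10.0))
    return eu

# Constant prior across X: the symmetric reward/penalty terms cancel, so the
# ordering of actions is exactly the no-hippo ordering.
flat = {a: 0.01 for a in actions}
print([round(expected_utility(a, flat), 3) for a in actions])    # [1.0, 0.0]

# Non-constant prior (say "hippo wants you to leave it shut" were much likelier):
# now the ordering flips, which is why condition (1) is doing real work.
skewed = {"open the closet": 0.001, "leave it shut": 0.1}
print([round(expected_utility(a, skewed), 3) for a in actions])  # [0.01, 0.99]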

Daniel

PS: I didn't really know what you were getting at with the stuff about
alchemists and metaphysics. If it's important, please clarify it for me.
There's no hurry because I'll be out of town for at least three or four days.


