From: Michael Vassar (firstname.lastname@example.org)
Date: Thu Aug 04 2005 - 13:07:05 MDT
>*Deep breath* because [justification is] the only way I know of
>distinguishing between good and bad judgments.
People who don't know formal techniques for distinguishing between good and
bad judgments should study rationality more before considering themselves
to be SL4.
The difference between "exploits" and "ninja hippos" is primarily one of
specificity and secondarily one of implication. Relevant ninja hippos
occupy a very constrained and well-defined realm within possibility space,
while "exploits" refers to an extremely large space of possibilities which
has not been well characterized. In fact, it is a space defined so that it
consists entirely of uncharacterized possibilities. If we could
characterize all of the options available to an SI in what is to us an
intractable domain, we would be SIs. Actually, in an infinite universe
there probably are "ninja hippos" to the extent that "ninja hippo" is the
designation of an empirical cluster of phenomena, e.g. something "thingish",
rather than being, as I suspect you at least psychologically meant it to be,
an absurdist juxtaposition of incompatible attributes, e.g. something
"jokeish". Distant ninja hippos are of little relevance to our planning
however, so while the probability may be high, the probability times
implication (impact on utility of actions) is low. The implications are
also not well defined, and there is little reason for thinking that they can
be defined, so the thought experiment fails in a manner parallel to Pascal's
wager. Since the Kolmogorov complexity of a god or a ninja hippo which
wants you to do X (e.g. one which changes the utility implications of any
particular behavior in any particular way) is roughly constant across the
space of possible values of X, and since we have no Bayesian valid evidence
for updating our priors, nor any way of gaining such evidence, our rational
behavior does not differ from what it would be if ninja hippos did exist.
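The cancellation argument above can be illustrated with a toy expected-utility calculation (a hypothetical model with made-up numbers, not anything from the original post): for every hippo hypothesis that rewards a given action by some amount, there is an equally simple hypothesis that punishes it by the same amount, so with symmetric priors and no evidence to update on, the hippo terms cancel and the ranking of actions is exactly what it would be if no hippos existed.

```python
# Toy sketch of the symmetry argument. Since a "ninja hippo which wants you
# to do X" has roughly the same Kolmogorov complexity for every X, the
# opposing hypotheses get equal priors, and with no Bayesian-valid evidence
# to break the tie, their contributions to expected utility cancel.

# Hypothetical utilities chosen purely for illustration.
BASE_UTILITY = {"act": 10.0, "refrain": 4.0}  # utility ignoring hippos

# Each hippo hypothesis: (prior probability, utility shift applied to "act").
# The space is symmetric: for every hippo that rewards "act" by d, there is
# an equally complex hippo that punishes "act" by d.
hippo_hypotheses = [
    (0.001, +1000.0),  # hippo that wants you to act
    (0.001, -1000.0),  # equally complex hippo that wants you to refrain
]

def expected_utility(action):
    eu = BASE_UTILITY[action]
    for prior, shift in hippo_hypotheses:
        if action == "act":  # shifts only apply to "act" in this toy model
            eu += prior * shift
    return eu

# The symmetric hippo terms cancel, so the ranking of actions is unchanged:
# rational behavior does not differ from the no-hippo case.
assert expected_utility("act") == BASE_UTILITY["act"]
assert expected_utility("act") > expected_utility("refrain")
```

The point of the sketch is only the cancellation: the shifts are large (+/-1000) and the priors nonzero, yet the net effect on the decision is exactly zero.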
Anyway, the serious question that parallels "ninja hippos" (though we are
moving into metaphysics here) is: "what is the probability that the
philosophical foundations of our implicit world-view are grossly flawed?"
Unfortunately, while the best available calibrated Bayesian answer to that
question seems to be "quite high", it doesn't seem to me that knowing this
helps us much by suggesting a path of action. It seems to me that if we are
fluctuations in the quantum vacuum, or something equivalent, then we are
just screwed, or rather, not even screwed. Then again, it seems to me that
Eliezer once had a roughly parallel attitude towards objective ethics, so
the Bayesian calibrated strength of my assessment that we shouldn't worry
about metaphysics is also not terribly high.
The proper question about medieval alchemists is "what would have been the
well-calibrated probability assigned to each of these assertions: a) great
wealth can be created by those with deep understanding of how the universe
works, b) wealth in the form of gold can likewise be so created, c) it could
be created from lead, d) they could figure out how to create gold from lead,
e) they could figure it out by following a particular research program, and
f) this is how you create gold from lead!"?
At a less analytical level, they could have asked "what is the expected
return on investment from alchemy research, and how much of that return can
we capture?"
The proper question about 3-digit primes is "what is the well-calibrated
probability that any particular mathematical statement which is true within
a particular mathematical formal system will be false within some other
practically useful system"?
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT