From: Kevin Peterson (kevinpet@gmail.com)
Date: Mon Nov 05 2007 - 17:56:36 MST
On 11/1/07, Wei Dai <weidai@weidai.com> wrote:
>
> Suppose an SI wants to do something with a very small, but non-zero
> probability, say 0 < p <= 1/3^^^3 (in Knuth's up-arrow notation). How
> would it go about doing this? The answer seems to be "it can't", because
> algorithmic information theory says there is no way that it can generate
> a sufficiently long random number that it can know is truly random. Can
> anyone see a way around this limitation?
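(For a sense of scale, here is a minimal Python sketch of Knuth's up-arrow
notation. Only tiny arguments are actually computable; 3^^^3 itself is far
beyond anything that could ever be evaluated, which is what makes a bound
like 1/3^^^3 so extreme.)

    def up_arrow(a, n, b):
        """Knuth's up-arrow a (n arrows) b: one arrow is ordinary
        exponentiation, and each extra arrow iterates the previous
        operator."""
        if n == 1:
            return a ** b
        if b == 0:
            return 1
        return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

    print(up_arrow(3, 1, 3))  # 3^3  = 27
    print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
    # 3^^^3 = 3^^(3^^3) is a power tower of 7625597484987 threes,
    # so any p <= 1/3^^^3 is unimaginably small.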
There is no situation that would call for taking an action based on such a
low probability.
Suppose someone, call him Tim, is evaluating what course of action he should
take. Tim calculates that the correct action is to play a mixed strategy of
A with probability p, and B with probability 1 - p. But wait! p is so small
that Tim cannot randomly choose a number with sufficient precision to
properly play this strategy. Tim's head explodes from thinking about this.
So why isn't this a problem? Because if Tim was able to determine that the
best strategy is to perform some action with probability p, he must already
be able to calculate with numbers of the same order of magnitude as p.
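As a rough illustration of that point (a minimal Python sketch, with
Fraction standing in for whatever exact representation of p Tim actually
uses, and Python's pseudorandom generator standing in for a genuine source
of randomness):

    import random
    from fractions import Fraction

    def play_mixed(p: Fraction) -> str:
        """Play A with probability p and B with probability 1 - p.
        Drawing the uniform integer below means handling numbers as
        large as p's denominator -- the same order of magnitude Tim
        already had to work with just to compute p in the first place."""
        draw = random.randrange(p.denominator)  # uniform in [0, denominator)
        return "A" if draw < p.numerator else "B"

    print(play_mixed(Fraction(1, 3 ** 7)))  # easy; a denominator of 3^^^3
                                            # would not fit anywhere, but then
                                            # neither would the calculation
                                            # that produced it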
I don't think the subtleties of randomness come into play at all here,
although they could in similar situations -- rock-paper-scissors is not
completely random, but it's generally good enough for children who do not
have access to better methods of settling trivial disputes. The analogy here
would be that Tim might determine that the probability attached to some
portion of a mixed strategy is so low that the expected loss from simply
never playing it is smaller than the cost of calculating that probability
accurately.
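In other words, a back-of-the-envelope rule (a sketch with made-up names;
none of these quantities come from the thread) would look something like:

    def worth_randomizing(p_estimate, payoff_gain, computation_cost):
        """Only bother playing (and precisely computing) the low-probability
        branch when its expected contribution exceeds the cost of working
        out its probability accurately. All three inputs are illustrative."""
        return p_estimate * payoff_gain > computation_cost

    # With p anywhere near 1/3^^^3 the expected contribution is effectively
    # zero, so the branch is never worth the calculation.
    print(worth_randomizing(1e-300, 1.0, 1e-9))  # False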