From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Thu Sep 08 2005 - 09:52:56 MDT
Wei Dai wrote:
>> I bet that if you name three subtleties, I can
>> describe how Bayes plus expected utility plus Solomonoff
>> (= AIXI) would do it given infinite computing power.
> What about mathematics? I don't think we know how to do mathematics
> under a Bayesian framework, even with infinite computing power (since
> some mathematical statements can be neither proved nor disproved from
> the usual axioms). I noticed that nobody replied to my "uncertainty in
> mathematics" post (http://sl4.org/archive/0508/12022.html).
My challenge was meant for Loosemore alone; I didn't mean it as an open
invitation to the whole SL4 list! I'm working on two book chapters that
are due in October right now, and that would be a bit more than I can
handle.

Wei Dai, you are correct. Mathematical reasoning is not usually
regarded as the domain of probability theory. There are efforts to
extend standard decision theory to remove the assumption of logical
omniscience, under the heading of "impossible possible worlds".
Figuring out exactly how to integrate this is, in fact, one of the
things I'm currently doing some work on myself.
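(A toy illustration of the point at issue, not anything from the thread
itself: a bounded reasoner can treat a mathematical fact as uncertain and
update on cheap computations, even though the fact is logically determined.
The sketch below assigns a probability to "n is prime" and updates on
Fermat tests with small bases; the 1/2 pass-rate bound for composites is a
crude standard bound and the numbers are arbitrary examples.)

```python
def fermat_witness(n, a):
    # True if base a proves n composite (a^(n-1) mod n != 1).
    return pow(a, n - 1, n) != 1

def update(prior, n, trials=10):
    # Bayesian update on "n is prime" from deterministic small-base
    # Fermat tests. A witness settles the question: probability 0.
    # A passed test is certain given primality, and slips through a
    # composite with probability at most 1/2 (ignoring Carmichael
    # numbers), so each pass roughly doubles the odds of primality.
    p = prior
    for a in range(2, 2 + trials):
        if fermat_witness(n, a):
            return 0.0
        # P(pass | prime) = 1; P(pass | composite) <= 1/2
        p = p / (p + (1 - p) * 0.5)
    return p

# The reasoner starts uncertain and grows confident without a proof:
print(update(0.5, 104729))  # 104729 is prime; belief -> 1024/1025
print(update(0.5, 104730))  # even, so base 2 is a witness; prints 0.0
```

This is of course far short of a decision theory without logical
omniscience; it only shows what "probability of a mathematical statement"
can mean operationally for a bounded reasoner.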
Now let's see *Loosemore* come up with an example.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT