**From:** Wei Dai (*weidai@weidai.com*)

**Date:** Wed Aug 24 2005 - 16:46:06 MDT

**Next message:** Eliezer S. Yudkowsky: "AI-Box Experiment #4: Russell Wallace, Eliezer Yudkowsky"
**Previous message:** Michael Wilson: "Re: Complexity tells us to maybe not fear UFAI"

In order to make decisions, an AI may have to make use of mathematical conjectures that it is unable to either prove or disprove. (For example, someone may ask the AI to bet directly on the truth of a conjecture.) Has there been any research on how this should be done? One idea would be to just apply decision theory and treat conjectures as statements about the state of the world to which probabilities can be assigned. We would have to extend standard decision theory to remove the assumption of logical omniscience, but supposing that is done, where do we get priors for mathematical statements?
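Once a credence has somehow been assigned, the decision-theoretic part of the bet is mechanical; the open question in the post is where that credence comes from. A minimal sketch, where the `credence` value and the payoff numbers are purely illustrative assumptions:

```python
def expected_value(credence, payoff_if_true, payoff_if_false):
    """Expected payoff of a bet on a conjecture, treating its truth
    as an event with subjective probability `credence`."""
    return credence * payoff_if_true + (1 - credence) * payoff_if_false

# Illustrative numbers: win 1 if the conjecture is true, lose 2 if false.
# A standard expected-utility agent accepts exactly when this is positive.
ev = expected_value(0.8, 1.0, -2.0)   # 0.8 * 1.0 + 0.2 * (-2.0) = 0.4
accept = ev > 0
```

Everything interesting is hidden inside the single number `credence`; the arithmetic itself does not depend on how it was obtained.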

As a specific example, consider Goldbach's Conjecture, which states: every even number greater than 2 can be written as the sum of two primes. Intuitively, it seems that we can gain confidence in this conjecture by verifying that a large number of even integers can be written as the sum of two primes. But to formalize this idea using Bayes' theorem would require the prior probability that Goldbach's Conjecture is true, and a function f(n) = P(there is at least one even number less than n that can't be written as the sum of two primes | Goldbach's Conjecture is false). How does an AI come up with the prior and such a function?
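The verification half of this is easy to mechanize, and writing out the Bayesian update shows exactly where the prior and f(n) enter. A sketch, where both the prior and any concrete value of f(n) are placeholder assumptions (supplying them is precisely the open question):

```python
def is_prime(k):
    """Trial-division primality test (adequate for small bounds)."""
    if k < 2:
        return False
    if k % 2 == 0:
        return k == 2
    i = 3
    while i * i <= k:
        if k % i == 0:
            return False
        i += 2
    return True

def goldbach_verified_below(n):
    """True iff every even m with 4 <= m < n is a sum of two primes."""
    for m in range(4, n, 2):
        if not any(is_prime(p) and is_prime(m - p)
                   for p in range(2, m // 2 + 1)):
            return False
    return True

def posterior(prior, f_n):
    """Bayes' theorem on the evidence E = 'no counterexample below n':
    P(E | conjecture true) = 1, and by the definition of f in the post,
    P(E | conjecture false) = 1 - f(n)."""
    return prior / (prior + (1 - prior) * (1 - f_n))
```

Running `goldbach_verified_below(10**4)` succeeds, but how much that verification is worth depends entirely on the assumed prior and f(n): with f(n) near 1 the check is strong evidence, with f(n) near 0 it moves the posterior almost not at all.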


*This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT*