**From:** Wei Dai (*weidai@weidai.com*)

**Date:** Sun Sep 11 2005 - 09:29:21 MDT

**Next message:** Nick Hay: "Re: mathematical reasoning"
**Previous message:** David Picon Alvarez: "Re: potential technological idea"
**In reply to:** Eliezer S. Yudkowsky: "Re: The Relevance of Complex Systems [was: Re: Retrenchment]"
**Next in thread:** Nick Hay: "Re: mathematical reasoning"
**Reply:** Nick Hay: "Re: mathematical reasoning"

Eliezer wrote:

> Wei Dai, you are correct. Mathematical reasoning is not usually regarded
> as the domain of probability theory. There are efforts to extend standard
> decision theory to remove the assumption of logical omniscience, under the
> heading of "impossible possible worlds". Figuring out exactly how to
> integrate this is, in fact, one of the things I'm currently doing some
> work on myself.

I'm aware of "impossible possible worlds", but as far as I can tell, there

hasn't been much work done in that direction beyond setting up some

formalism. It doesn't seem to offer any insight into how informal

mathematical reasoning actually works (i.e., how we come to believe some

unproved mathematical conjectures but not others).

Another direction, taken by Marcus Hutter's AIXI-tl and Schmidhuber's Gödel

Machine, is essentially to hard-wire difficult-to-prove or unprovable

mathematical conjectures into the AI as unchangeable assumptions. This may

not be obvious at first glance, so I'll explain. The way those AIs work is to

start with an initial policy (handcrafted program for making decisions) and

then systematically search for provable improvements to the policy. The

initial policy necessarily makes implicit or explicit assumptions about the

truth value of mathematical conjectures. Such an assumption can't be changed

unless it can be proved false; otherwise it's impossible to prove that

changing it won't lower expected utility.
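The dynamic described above can be made concrete with a toy sketch (hypothetical names throughout; this is not Schmidhuber's actual construction): a rewrite is accepted only when the improvement is provable, so an assumption whose negation is unprovable can never be revised.

```python
# Toy illustration (hypothetical, not the real Godel Machine): a policy
# search that accepts a rewrite only when backed by a proof of improvement.

# Statements our toy prover can settle. The conjecture "C" is absent on
# purpose: neither "C" nor "not C" is provable from these axioms.
PROVABLE = {"policy_B >= policy_A given not C"}

def provable(statement: str) -> bool:
    return statement in PROVABLE

# The handcrafted initial policy implicitly assumes conjecture C holds.
policy = {"name": "policy_A", "assumes": "C"}

def try_rewrite(policy, candidate, justification):
    # A switch is allowed only when the justifying statement is provable.
    if provable(justification):
        return candidate
    return policy

# policy_B would be better if "not C" held, but "not C" itself is not
# provable, so the proof of improvement can never be completed and the
# baked-in assumption about C is effectively permanent.
policy = try_rewrite(policy, {"name": "policy_B", "assumes": "not C"}, "not C")
print(policy["name"])  # the initial policy survives
```

The point of the sketch is only that the acceptance condition is one-sided: the machine never needs to *justify keeping* its current assumptions, only to prove any change safe, which it cannot do for an independent conjecture.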

I'm curious what others think about how important this problem is. I know

that in my own field of cryptography, heavy use is made of informal

mathematical reasoning, since it has not been possible to invent ciphers that

can be proven to be computationally hard to break (this would require P!=NP

at a minimum). Likewise in cryptanalysis, many important algorithms cannot

be proven correct and/or have only heuristic running times.
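A standard example of the latter (my choice of illustration, not one named above) is Pollard's rho factoring method: its expected running time of roughly O(n^(1/4)) rests on the unproved heuristic that the iteration x → x² + c (mod n) behaves like a random map.

```python
from math import gcd

def pollard_rho(n, c=1):
    """Pollard's rho factoring. The O(n**0.25) expected running time is
    only heuristic: it assumes x -> x*x + c (mod n) mixes like a random
    function, which has never been proved."""
    x, y, d = 2, 2, 1
    while d == 1:
        x = (x * x + c) % n      # tortoise: one step per iteration
        y = (y * y + c) % n      # hare: two steps per iteration
        y = (y * y + c) % n
        d = gcd(abs(x - y), n)
    return d if d != n else None  # d == n: the walk failed; retry with a new c

print(pollard_rho(8051))  # 8051 = 83 * 97, so this prints one of those factors
```

In practice the method works reliably and quickly, which is exactly the kind of belief we hold on informal grounds rather than proof.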

One might argue that the problem isn't important, because an AI with much

greater computational resources than we have will be able to easily prove or

disprove any mathematical conjectures that might be practically relevant.

But it's not obvious this will be the case. Gregory Chaitin has argued that

many interesting or natural mathematical questions may be independent of the

usual axioms (see http://www.cs.auckland.ac.nz/CDMTCS/chaitin/unm2.html). He

advocates that mathematics should be "quasi-empirical" and done more like

physics. I think this is already the case to a great extent, especially in

applied math, but we don't seem to know how to formalize the process so an

AI can also do this.


*This archive was generated by hypermail 2.1.5: Wed Jul 17 2013 - 04:00:52 MDT*