**From:** Wei Dai (*weidai@weidai.com*)

**Date:** Fri Oct 22 2004 - 05:04:46 MDT

**Next message:** Ben Goertzel: "RE: [agi] A difficulty with AI reflectivity"
**Previous message:** Eliezer Yudkowsky: "Re: Donate Today and Tomorrow"
**In reply to:** Eliezer Yudkowsky: "Re: [agi] A difficulty with AI reflectivity"
**Next in thread:** Ben Goertzel: "RE: [agi] A difficulty with AI reflectivity"
**Reply:** Ben Goertzel: "RE: [agi] A difficulty with AI reflectivity"
**Reply:** Eliezer Yudkowsky: "Re: [agi] A difficulty with AI reflectivity"
**Messages sorted by:** [ date ] [ thread ] [ subject ] [ author ] [ attachment ]

On Thu, Oct 21, 2004 at 04:10:57AM -0400, Eliezer Yudkowsky wrote:

> One way that might succeed in salvaging the Godel-Machine formalism would
> be to isolate the rewriting of the theorem verifier from the rewriting of
> the rest of the Godel Machine. [...]

I don't think this is necessary. Everything you describe can be accomplished by carefully choosing the axioms built into the initial theorem verifier, without changing the formalism itself. The utility function would be built into the axioms. The verifier would only accept a proposed rewrite if it comes with a proof that expected utility is increased, and the axioms could be chosen so that the only way to prove that a rewrite of the verifier increases expected utility is to prove that the new verifier accepts exactly the same set of proofs as the old verifier, but faster. Even the claim that the verifier can be rewritten out is correct: the machine will do that once a proof appears that no further self-optimization is likely to be found, and therefore a verifier is no longer needed. At that point the AI will rewrite itself to remove the proof searcher/verifier and disallow any further modifications.
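To make the acceptance rule above concrete, here is a minimal sketch in Python. All names here (`Rewrite`, `verifier_accepts`, and the fields used) are illustrative assumptions, not anything from Schmidhuber's formalism; the point is just the two-tier rule: ordinary rewrites need a proof of increased expected utility, while rewrites touching the verifier itself additionally need a proof of extensional equivalence (same accepted proof set) plus a speedup.

```python
# Hypothetical sketch of the verifier-acceptance rule described above.
# Class and field names are illustrative, not from the Godel Machine paper.
from dataclasses import dataclass
from typing import FrozenSet


@dataclass(frozen=True)
class Rewrite:
    """A proposed self-rewrite together with facts its proof establishes."""
    changes_verifier: bool              # does the rewrite touch the verifier?
    proves_utility_gain: bool           # proof that expected utility increases
    accepted_proof_set: FrozenSet[str]  # proofs the rewritten verifier accepts
    is_faster: bool                     # rewritten verifier runs faster


def verifier_accepts(rewrite: Rewrite,
                     current_proof_set: FrozenSet[str]) -> bool:
    """Accept only rewrites proven to increase expected utility.

    If the rewrite changes the verifier itself, additionally require that
    the new verifier provably accepts exactly the same set of proofs as
    the old one, but faster.
    """
    if not rewrite.proves_utility_gain:
        return False
    if rewrite.changes_verifier:
        return (rewrite.accepted_proof_set == current_proof_set
                and rewrite.is_faster)
    return True
```

Under this rule a verifier rewrite that drops or adds even one accepted proof is rejected regardless of its claimed utility gain, which is what pins the verifier's behavior to the initial axioms.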

Such a system will likely miss many opportunities for improvement (i.e., those whose benefits can't be proved from the built-in axioms). I think Schmidhuber acknowledged this in his paper. The axioms might also contain bugs leading to unintended consequences, which the AI would not be able to fix; for example, the utility function could be defined carelessly. (And for a general AI I don't see how it could be defined safely.) This is a problem inherited from AIXI which Schmidhuber doesn't seem to address.


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:49 MDT