**From:** Eliezer S. Yudkowsky (*sentience@pobox.com*)

**Date:** Wed May 10 2006 - 15:11:11 MDT


Ben Goertzel wrote:

> Well, Gödel's Theorem shows that for any reasonably powerful and
> consistent formal system, there are some statements that cannot be
> proved either true or false within that system. Furthermore, many of
> the examples of this kind of undecidable statement happen to be
> "meta-statements" that pertain to the formal system as a whole.
>
> So, if we have an AI system that operates via consistent application
> of a formal system (e.g. some variant of mathematical logic, including
> probabilistic logic), then there will be some statements about this
> system that cannot be proved true or false within the system.
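
For reference, the theorem being invoked, stated here in the Gödel–Rosser form (which needs only consistency, not ω-consistency):

```latex
% First incompleteness theorem (Godel-Rosser form): any consistent,
% recursively axiomatizable theory T extending PA leaves some sentence
% G undecided -- neither provable nor refutable within T.
% (Requires amssymb for \nvdash.)
\[
  T \supseteq \mathrm{PA},\ T \text{ consistent and r.e.}
  \;\Longrightarrow\;
  \exists G\;\bigl( T \nvdash G \ \text{ and } \ T \nvdash \neg G \bigr)
\]
```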

But the question is what impact this has on *decision-making*. What is the AI prohibited from *doing* as a result of Gödel's Theorem? What changes is it prohibited from making to its own code? Classical mathematical logic is all about proof, assertion, and belief, not about matters of decision theory. Moreover, classical mathematical logic is about *belief* rather than *anticipation*, in the sense of the distinction made in _Technical Explanation_; it isn't about organizing sensory impressions.

We know what happens when a proof system asserts its own consistency: by Gödel's second theorem, a consistent system of the relevant kind cannot prove it, so a system that does prove it is thereby inconsistent. What if an AI instead behaves as if a statement that it proves in Peano Arithmetic has a very low probability of being wrong?
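
Here is a minimal sketch of that distinction (hypothetical Python, not any actual AI architecture; `pa_proves`, `credence`, and `EPSILON` are all invented for illustration):

```python
EPSILON = 1e-9  # assumed: a small, fixed residual distrust in the prover


def pa_proves(statement: str) -> bool:
    """Stub for proof search in Peano Arithmetic.

    A real prover would search for a derivation of `statement` from the
    PA axioms; a few known theorems are hard-coded so the sketch runs.
    """
    known_theorems = {"1 + 1 = 2", "forall n: n + 0 = n"}
    return statement in known_theorems


def credence(statement: str) -> float:
    """Probability the agent acts on for `statement`.

    Note what is absent: nothing here asserts Con(PA) as a theorem.
    The agent's trust in PA lives in this behavioral rule, not in its
    belief set.
    """
    if pa_proves(statement):
        return 1.0 - EPSILON  # proven in PA => very low probability of being wrong
    return 0.5  # ignorance prior for everything else


if __name__ == "__main__":
    for s in ("1 + 1 = 2", "PA is inconsistent"):
        print(f"{s!r} -> credence {credence(s)}")
```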

And what happens when the AI rewrites the source code responsible for behaving as if statements proven in PA have low probabilities of being wrong? Or rewrites the code that rewrites the code? Does the AI necessarily have to *believe itself consistent*, in the sense that causes a formal system to break down, in order to rewrite its own code? We know what the Gödelian restrictions are; but there is a difference between knowing that and being able to say that Gödelian restrictions imply limitations for AIs.
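
The "sense that causes a formal system to break down" is pinned down by Gödel's second incompleteness theorem, and Löb's theorem gives the sharper form of the same trap:

```latex
% Godel's second incompleteness theorem: a consistent, recursively
% axiomatizable theory T extending PA cannot prove its own consistency.
% (Requires amssymb for \nvdash, \ulcorner, \urcorner.)
\[
  T \nvdash \mathrm{Con}(T),
  \qquad
  \mathrm{Con}(T) \;\equiv\; \neg\,\mathrm{Prov}_T(\ulcorner 0 = 1 \urcorner)
\]

% Lob's theorem: if T proves "whatever I prove about phi is true",
% then T already proves phi outright.
\[
  T \vdash \bigl(\mathrm{Prov}_T(\ulcorner \varphi \urcorner) \rightarrow \varphi\bigr)
  \;\Longrightarrow\;
  T \vdash \varphi
\]
```

Löb's theorem is why wholesale formal self-trust, T asserting "whatever I prove is true" for every sentence, is exactly the move that collapses; whether trust expressed as a behavioral probability rule amounts to the same move is the open question above.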

--
Eliezer S. Yudkowsky (http://intelligence.org/)
Research Fellow, Singularity Institute for Artificial Intelligence

