Re: [agi] A difficulty with AI reflectivity

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Fri Oct 22 2004 - 07:15:37 MDT


Ben Goertzel wrote:
>>Such a system will likely miss a lot of opportunities for improvement
>>(i.e., those that can't be proved using the built-in axioms). I think
>>Schmidhuber acknowledged this in his paper.
>
> Yeah, this is a limitation of the theorem-proving-based approach to AI.
>
> A paraconsistent / inconsistent AI system doesn't have this kind of
> limitation -- it can be driven, via chance or via coupling with the external
> world, into states of mind that are in no way predictable based on its
> initial axioms. On the other hand this also carries with it significant and
> obvious risks...

(Paraconsistent: a reasoning system in which proving a contradiction does
not let you prove everything, i.e., one that rejects the principle of
explosion.)

I view paraconsistency as a desirable property of an AGI for obvious safety
reasons. Also, humans are paraconsistent in a way that seems to reflect
important structure in our reasoning, such as considerations of relevance;
if 2 + 2 = 5, this does not seem to imply that Gertrude Stein is Queen of
England.
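
(For concreteness, here is the explosion step written as a Lean 4 snippet
of my own; the proposition names are just placeholders, not anything
formalized:)

    -- Ex falso quodlibet (explosion): from P and not-P, classical logic
    -- derives any Q whatsoever, however irrelevant.  A paraconsistent
    -- logic refuses exactly this inference.
    example (TwoPlusTwoIsFive SteinIsQueen : Prop)
        (h : TwoPlusTwoIsFive) (hn : ¬TwoPlusTwoIsFive) : SteinIsQueen :=
      absurd h hn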

Inconsistency, on the other hand, seems to me undesirable in the sense that
it should always indicate a mistake of some kind; I can't see a situation
where a sane AI *should* believe P and not-P. Inconsistent systems may be
able to prove much more than consistent systems, why, some of them can even
prove everything. But I don't call that power, or additional work being
accomplished, unless you can show me that an inconsistent system somehow
tends to prove *useful* things, true things, preferentially. It's easy
enough to prove the consistency of PA if we use an inconsistent logic, but
it's also easy enough to prove that your momma is consistent, and there's
no obvious reason why one proof would be favored over the other.
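
(The same point as a Lean 4 sketch of mine, with both targets left as
placeholder propositions: one and the same one-line explosion proof
delivers the useful conclusion and the useless one alike:)

    -- Placeholder propositions; nothing here formalizes PA or your momma.
    -- Given any contradiction, the very same proof term establishes both.
    example (ConPA MommaConsistent P : Prop)
        (h : P) (hn : ¬P) : ConPA ∧ MommaConsistent :=
      ⟨absurd h hn, absurd h hn⟩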

How a formal system adjoined to expected utility decides which proofs to
try to prove is another question I didn't see addressed in the Gödel
Machine paper.
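
(One naive way to make the question concrete, and this is purely my own
sketch, not anything from the paper: rank candidate theorems by estimated
expected-utility gain per unit of proof-search effort. Where those
estimates come from is, of course, the question itself:)

    import heapq

    # A naive sketch (mine, not Schmidhuber's): schedule candidate
    # theorems by estimated expected-utility gain per unit of
    # proof-search effort.  The est_gain and est_cost numbers would
    # themselves have to come from somewhere, which is exactly the
    # open question.
    def schedule(candidates):
        """candidates: iterable of (name, est_gain, est_cost) triples."""
        heap = []
        for name, est_gain, est_cost in candidates:
            # Negate because heapq is a min-heap and we want best-first.
            heapq.heappush(heap, (-est_gain / est_cost, name))
        while heap:
            _, name = heapq.heappop(heap)
            yield name  # attempt a proof of `name` next

    for theorem in schedule([("rewrite_A_is_safe", 5.0, 2.0),
                             ("rewrite_B_is_safe", 9.0, 10.0)]):
        print("try to prove:", theorem)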

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

