RE: [agi] A difficulty with AI reflectivity

From: Ben Goertzel (ben@goertzel.org)
Date: Fri Oct 22 2004 - 06:16:06 MDT


> Such a system will likely miss a lot of opportunities for improvement
> (i.e., those that can't be proved using the built-in axioms). I think
> Schmidhuber acknowledged this in his paper. The axioms might also have
> bugs leading to unintended consequences, which the AI would not be able to
> fix. For example, the utility function could be defined carelessly. (And
> for a general AI I don't see how it could be defined safely.) This is a
> problem inherited from AIXI, which Schmidhuber doesn't seem to address.

Yeah, this is a limitation of the theorem-proving-based approach to AI.
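
(An illustrative sketch only, not Schmidhuber's actual construction: a
Goedel-machine-style self-modifier adopts a rewrite only when it can prove,
from its built-in axioms, that the rewrite raises expected utility. The names
below -- Rewrite, provable_gain, self_improve -- are made up for illustration.)

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Rewrite:
        name: str
        true_gain: float                 # actual utility gain (unknown to the prover)
        provable_gain: Optional[float]   # gain derivable from the axioms, if any

    def provably_better(rw: Rewrite) -> bool:
        # Stand-in for "a proof from the built-in axioms shows this rewrite
        # increases expected utility."
        return rw.provable_gain is not None and rw.provable_gain > 0

    def self_improve(candidates: list[Rewrite]) -> list[Rewrite]:
        # Only provably-good rewrites are adopted; everything else is skipped,
        # even if it would in fact be a large improvement.
        return [rw for rw in candidates if provably_better(rw)]

    candidates = [
        Rewrite("cache subproofs", true_gain=0.2, provable_gain=0.2),
        Rewrite("heuristic planner", true_gain=0.9, provable_gain=None),  # missed
    ]
    print([rw.name for rw in self_improve(candidates)])
    # -> ['cache subproofs']; the larger improvement is never taken, and a
    #    buggy utility axiom would be enforced just as faithfully.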

A paraconsistent / inconsistent AI system doesn't have this kind of
limitation -- it can be driven, via chance or via coupling with the external
world, into states of mind that are in no way predictable from its
initial axioms. On the other hand, this also carries significant and
obvious risks...

-- Ben


