Re: [agi] A difficulty with AI reflectivity

From: Michael Roy Ames (michaelroyames@yahoo.com)
Date: Thu Oct 21 2004 - 19:01:22 MDT


Eliezer wrote:

> Schmidhuber's original claim, that the
> Godel Machine could rewrite every part
> of its own code including tossing out
> the need for a theorem-prover, I would
> consider wrap-around reflectivity.

Fair enough, thank you.

> A human's ability to have this kind of
> email conversation - reflect on different
> possible architectures for reflectivity -
> I would consider an even higher kind of
> wrap-around, and the kind I'm most
> interested in.

A "higher" kind? Different, certainly, as we can rewrite little of our
cognitive processes. But use of the word 'higher' leads me to suspect
phantoms. As neither of us yet knows how humans do reflectivity, the best we
can do is discuss our guesses.

> We are algorithms, but I don't think we're
> doing the same sort of thing that a
> reflective theorem-prover would do. For
> example, humans can actually be temporarily
> confused by "This statement is false",
> rather than using a language which refuses
> to form the sentence.

The human experience of confusion manifests when we encounter data that
corresponds poorly with our internal model of that part of the world.
A theorem prover decides the truth or falsehood of a statement based on
its axioms, and that is all it can do. Humans can additionally decide whether
a statement fits their current model of reality. If a theorem prover were
complex enough to contain a non-trivial model of reality against which it
could decide 'does it fit?' questions, then it would come much closer to a
human's abilities in wrap-around reflectivity.
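The contrast above can be sketched in a few lines of Python. This is a toy
illustration only (all names here are hypothetical, not any real prover's
API): an axiom-only "prover" returns a verdict solely for statements in its
axiom set, while an extended version falls back to a 'does it fit?' check
against a small model of reality.

```python
# Toy sketch: axiom-based truth vs. model-fit checking.
# All data and function names are illustrative assumptions.

AXIOMS = {"snow is white": True, "2+2=5": False}  # statements with known truth values

def prove(statement):
    """Axiom-only evaluation: True/False if the statement is decided
    by the axioms, None if the prover cannot decide at all."""
    return AXIOMS.get(statement)

WORLD_MODEL = {"snow is white", "grass is green"}  # a (trivial) model of reality

def fits_model(statement):
    """The additional human-like question: does the statement fit
    the current model of reality?"""
    return statement in WORLD_MODEL

def evaluate(statement):
    """First try to decide from axioms; failing that, fall back to
    asking whether the statement fits the world model."""
    verdict = prove(statement)
    if verdict is not None:
        return verdict
    return "fits model" if fits_model(statement) else "does not fit model"
```

A pure prover stops at `prove()`; the point of the sketch is that `evaluate()`
can still say something useful ("fits" or "does not fit") about statements
its axioms cannot decide.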

> We're doing something else.

Agreed, though I would phrase it: "We're doing more". What is it you think
we are doing?

Michael Roy Ames



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:49 MDT