RE: [agi] A difficulty with AI reflectivity

From: Ben Goertzel (ben@goertzel.org)
Date: Thu Oct 21 2004 - 19:52:37 MDT


Hi,

About the comment that humans are doing something different from Godel
machines and other theorem-proving-type AI systems:

Indeed, we are!

Godel's theorem concerns the possibility (or impossibility) of systems of
logic that are simultaneously complete, consistent, and sufficiently
powerful...

However, humans would seem NOT to be logically consistent, so the
applicability of this theorem to humans is a bit suspect...

We have sophisticated, nonlinear attention-allocation systems that allow us
to manage our inconsistencies, in a way that works OK given the environments
we've evolved for...

But the problems faced by an inconsistency-embracing intelligence
architecture are quite different from those faced by a
mathematical-consistency-based intelligence architecture.

Mathematical consistency would seem to require massively more computational
resources than human-cognition-like strategies for embracing
inconsistency...

The way the human mind deals with "This sentence is false", on an initial
intuitive basis, is to embrace the inconsistency, in a manner similar to how
we embrace many of our internal inconsistencies. Of course this is
different from how a fully consistent AI system would experience such a
paradox.
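
To make the contrast concrete, here is a minimal toy sketch in Python (my
own illustration, hypothetical names, not anything from Novamente): a
classically consistent knowledge base must reject any update that
contradicts what it already holds, while an inconsistency-embracing one
simply records the conflicting evidence with weights and leaves resolution
to later attention allocation.

    # Toy illustration only -- hypothetical, not Novamente code.

    class ClassicalKB:
        """Rejects any assertion that contradicts a stored belief."""
        def __init__(self):
            self.beliefs = {}  # proposition -> bool

        def assert_(self, prop, value):
            if prop in self.beliefs and self.beliefs[prop] != value:
                raise ValueError("contradiction -- update rejected")
            self.beliefs[prop] = value

    class TolerantKB:
        """Stores conflicting evidence side by side, with weights."""
        def __init__(self):
            self.evidence = {}  # proposition -> list of (value, weight)

        def assert_(self, prop, value, weight=1.0):
            # Contradictions are recorded, not rejected; some later
            # process decides what (if anything) to do about them.
            self.evidence.setdefault(prop, []).append((value, weight))

    kb = TolerantKB()
    kb.assert_("liar-sentence-is-true", True, 0.5)
    kb.assert_("liar-sentence-is-true", False, 0.5)  # no explosion

The classical version pays for its guarantee up front on every update; the
tolerant version defers that cost, and may never pay it at all.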

Whether consistency is "better" in an AI system is a complicated question,
unless one assumes infinite or essentially infinite resources. Consistency
is better "all else equal" but the computational cost of maintaining
consistency is probably very high, and the effort spent on this may be
better spent on other things (as is my suspicion).

Novamente, as you may have guessed, is not guaranteed to be consistent,
though it can strive for consistency in particular domains when this is
judged important by it.
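
For instance (again, a purely hypothetical sketch continuing the toy code
above, not a description of Novamente's actual mechanism), striving for
consistency in a particular domain could amount to an on-demand check run
only over the propositions belonging to a domain judged important, rather
than a global invariant enforced on every update:

    # Hypothetical extension of the toy sketch above.
    def domain_conflicts(kb, domain_props, threshold=0.0):
        """List propositions in the chosen domain with nontrivial
        evidence on both sides, i.e. candidates for resolution."""
        conflicts = []
        for prop in domain_props:
            ev = kb.evidence.get(prop, [])
            pro = sum(w for v, w in ev if v)
            con = sum(w for v, w in ev if not v)
            if pro > threshold and con > threshold:
                conflicts.append((prop, pro, con))
        return conflicts

    # Consistency is sought only where it is judged to matter:
    print(domain_conflicts(kb, ["liar-sentence-is-true"]))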

-- Ben G

> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org]On Behalf Of Michael
> Roy Ames
> Sent: Thursday, October 21, 2004 9:01 PM
> To: sl4@sl4.org
> Subject: Re: [agi] A difficulty with AI reflectivity
>
>
> Eliezer wrote:
>
> > Schmidhuber's original claim, that the
> > Godel Machine could rewrite every part
> > of its own code including tossing out
> > the need for a theorem-prover, I would
> > consider wrap-around reflectivity.
>
> Fair enough, thank you.
>
>
> > A human's ability to have this kind of
> > email conversation - reflect on different
> > possible architectures for reflectivity -
> > I would consider an even higher kind of
> > wrap-around, and the kind I'm most
> > interested in.
>
> A "higher kind? Different, certainly, as we can rewrite little of our
> cognitive processes. But use of the word 'higher' leads me to suspect
> phantoms. As neither of us know yet how humans do reflectivity,
> the best we
> can do is discuss our guesses.
>
>
> > We are algorithms, but I don't think we're
> > doing the same sort of thing that a
> > reflective theorem-prover would do. For
> > example, humans can actually be temporarily
> > confused by "This statement is false",
> > rather than using a language which refuses
> > to form the sentence.
>
> The human experience of confusion manifests when we encounter
> data that has
> very poor correspondence with our internal model of that part of
> the world.
> A theorem prover decides on the truth or falsehood of a
> statement based on
> its axioms, and that is all it can do. Humans can additionally
> decide if the
> statement fits their current model of reality. If a theorem prover were
> complex enough to contain a non-trivial model of reality against which it
> could decide 'does it fit' questions, then it would come much closer to a
> human's abilities in wrap-around reflectivity.
>
>
> > We're doing something else.
>
> Agreed, though I would phrase it: "We're doing more". What is it
> you think
> we are doing?
>
>
> Michael Roy Ames
>
>
>


