A difficulty with AI reflectivity

From: Marc Geddes (marc_geddes@yahoo.co.nz)
Date: Wed Oct 20 2004 - 00:13:54 MDT

Um...uh...fascinating, fascinating. This could be the
key to the entire FAI puzzle, by God!

Could you not have full wrap-around reasoning if you
give up the idea that you can achieve certainty in
anything? Why would the Gödel machine need to
establish with certainty that a new proof system is,
in fact, consistent? Surely all that is required *for
practical purposes* is that the machine can establish
consistency with *some suitably high degree of
probability*?
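To make the "probability, not certainty" point concrete, here's a toy sketch. This is entirely my own construction, nothing like a real Gödel machine: a candidate "proof system" is just a function claiming certain propositional formulas are tautologies, and instead of demanding a proof of soundness we look for a counterexample by random sampling, accepting with high (but never total) confidence.

```python
import random

random.seed(0)  # deterministic for reproducibility

def claims_tautology(formula):
    """A deliberately unsound candidate system: it claims that
    (p or q) is a tautology -- which is false (take p = q = False)."""
    return formula == ("or", "p", "q")

def evaluate(formula, assignment):
    """Evaluate a tiny propositional formula under a truth assignment."""
    op, a, b = formula
    if op == "or":
        return assignment[a] or assignment[b]
    raise ValueError(op)

def probably_sound(system, formula, trials=1000):
    """Accept the system's claim about `formula` unless a randomly
    sampled assignment refutes it.  After `trials` failed refutation
    attempts, accept -- with high probability but never certainty."""
    if not system(formula):
        return True  # system makes no claim here; nothing to refute
    for _ in range(trials):
        assignment = {"p": random.random() < 0.5,
                      "q": random.random() < 0.5}
        if not evaluate(formula, assignment):
            return False  # counterexample found: claim refuted
    return True

# The false claim survives one trial with probability 0.75, so it
# slips past 1000 trials with probability 0.75**1000 -- negligible.
print(probably_sound(claims_tautology, ("or", "p", "q"), trials=1000))
```

The point of the sketch is only that "accept after N failed refutations" trades certainty for a quantifiable error bound, which is exactly the trade being proposed.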

How to avoid the towers-of-meta problem? Um... what
about *partitioning* a language into two - say your
full language is A. Break it up and cleverly redefine
it so that it looks as if two sub-languages are
present - call them B and C. Make it so that B can
serve as the meta-language for C, and C can serve as
the meta-language for B.

I imagine that the partitioning could not be total. B
and C would 'bleed' into each other to a small degree
(since B and C are in actuality both just subsets of
A, redefined in a clever way to make them *appear* to
be fully self-contained sub-languages). The trick is
to create the *illusion* that B is the meta-language
for C, and C is the meta-language for B. It doesn't
need to be perfect, just *good enough* for all
*practical purposes*.

Am I making any sense here?
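In case it helps, here is a toy sketch of the partition idea. Everything in it (the `Half` class, `eval_A`, the claim format) is invented purely for illustration: two apparent "sub-languages" B and C each verify the *other's* claims, but both secretly delegate to one shared evaluator for A - that shared core is the "bleed".

```python
def eval_A(expr):
    """The full language A: tiny arithmetic expressions like
    ('+', 2, 3).  Both halves secretly rely on this one evaluator."""
    op, a, b = expr
    return a + b if op == "+" else a * b

class Half:
    """One apparent sub-language.  It can make claims, and it can
    verify claims made by the other half -- so each half *looks* like
    the other's meta-language, though verification is just eval_A."""
    def __init__(self, name):
        self.name = name

    def claim(self, expr, value):
        """Assert, in this half's 'language', that expr evaluates to value."""
        return (self.name, expr, value)

    def verify(self, other_claim):
        origin, expr, value = other_claim
        assert origin != self.name, "a half only judges the other half"
        return eval_A(expr) == value  # the 'bleed': shared machinery of A

B, C = Half("B"), Half("C")
c_says = C.claim(("+", 2, 3), 5)   # a true claim made "in C"
b_says = B.claim(("*", 2, 3), 7)   # a false claim made "in B"
print(B.verify(c_says))  # B acting as C's meta-language
print(C.verify(b_says))  # C acting as B's meta-language
```

The mutual verification is an illusion in exactly the sense described above: strip away the names and both halves are the same fragment of A.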


"Live Free or Die, Death is not the Worst of Evils."
                                                    - Gen. John Stark

"The Universe...or nothing!"

Please visit my web-sites.

Sci-Fi and Fantasy : http://www.prometheuscrack.com
Mathematics, Mind and Matter : http://www.riemannai.org


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:49 MDT