Re: [agi] A difficulty with AI reflectivity

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Thu Oct 21 2004 - 15:21:51 MDT


Jeff Medina wrote:
> Eliezer said: "Thank you for your suggested reading, but I'm already
> quite extremely aware that no consistent formal system can prove its own
> consistency. If you're not clear on why this presents a unique
> challenge for reflective theorem provers, I recommend John Harrison's
> "Metatheory and Reflection in Theorem Proving: A Survey and Critique." "
>
> I neither suggested any readings, nor made mention of the fact that no
> consistent formal system can prove its own consistency (which is such a
> fundamental and widely known result, I take it that it goes without saying
> on such a mailing list). Perhaps the flurry of replies resulted in my
> name being included in a reply intended for a different commenter. If
> not, I'm baffled as to how any of what you've said applies to my recent
> message.

Jeff Medina wrote:
>
> You won't ever find anything in the literature on how to prove that a
> system of proof is infallibly truth-producing outside of the context of
> said system of proof (or even how to prove that system-of-proof A is
> more consistently truth-producing than system-of-proof B; not outside of
> the context of system-of-proof A, B, or X) because even if someone were
> to come up with something that looked like a proof that "wraps around"
> as you put it, they (or the journal editors/referees) would reject it as
> unsound, due to a reductio ad absurdum: it would contradict what the
> rest of proof theory claims is true, namely that wrap-around is
> impossible -- truth is forever uncertain at its base. We
> cannot avoid inserting potentially unsound assumptions.

If by this you meant something other than that no consistent system can
prove its own consistency, then I'm sorry for the misattribution. I also
acknowledge that you did not suggest any specific readings.
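
(For precision, the background result here is Gödel's second
incompleteness theorem: if T is a consistent, effectively axiomatized
theory containing enough arithmetic (say, an extension of Peano
Arithmetic), then T does not prove Con(T), the arithmetized statement
of its own consistency. That is the obstacle to the sort of "wrap
around" self-verification discussed above: adding an axiom strong
enough to yield Con(T) only shifts the question to the consistency of
the new, larger system.)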

I wish people would realize that I know the basics, but sometimes I do
make embarrassing mistakes, as with the recent discussion of fleeing a
UFAI at near-lightspeed. (It's not that I didn't know the math; it's
that I guessed intuitively instead of taking ten seconds to derive
quantitatively the subjective lifespan of the escapee... stupid of me.)
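
(A back-of-the-envelope sketch of the kind of ten-second calculation I
mean, under toy assumptions rather than the actual figures from that
thread: give the escapee a head start d and a flight speed beta*c, and
let the pursuit front expand at exactly c. The catch-up time in the
rest frame is t = d / (c - beta*c), and the escapee's subjective
(proper) time is tau = t * sqrt(1 - beta^2). A throwaway Python
sketch, with hypothetical numbers:

    import math

    C = 299_792_458.0  # speed of light, m/s

    def subjective_lifespan(head_start_m, beta):
        """Proper time in seconds experienced by an escapee with a
        head start of head_start_m metres, fleeing at beta*c from a
        pursuit front that expands at exactly c. Toy illustration of
        special-relativistic time dilation only."""
        if not 0.0 <= beta < 1.0:
            raise ValueError("beta must be in [0, 1)")
        t_rest_frame = head_start_m / (C * (1.0 - beta))    # t = d / (c - v)
        return t_rest_frame * math.sqrt(1.0 - beta * beta)  # tau = t * sqrt(1 - beta^2)

    one_light_year_m = C * 365.25 * 24 * 3600
    for beta in (0.5, 0.9, 0.99, 0.999):
        years = subjective_lifespan(one_light_year_m, beta) / (365.25 * 24 * 3600)
        print(f"beta = {beta}: about {years:.1f} subjective years")

Under these assumptions tau works out to (d/c) * sqrt((1+beta)/(1-beta)),
which grows without bound as beta approaches 1 -- exactly the sort of
result worth deriving rather than guessing at.)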

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

