Re: A difficulty with AI reflectivity

From: Wei Dai (weidai@weidai.com)
Date: Wed Oct 20 2004 - 02:12:54 MDT


On Tue, Oct 19, 2004 at 11:02:27AM -0400, Eliezer Yudkowsky wrote:
> [...] Leaving aside other
> qualms I have about the Gödel Machine, it appears to me that for the Gödel
> Machine to execute a change that switches to a new theorem-prover or
> proof-verifier, the Gödel Machine must prove that if the proof-verifier
> accepts P, then P. [...]

I think you probably misunderstood the paper. The purpose of searching for
new theorem provers is to speed up theorem proving, not to prove new
theorems that were unprovable by the old prover. So the Gödel Machine only
needs to prove that the new prover proves a theorem if and only if the
current prover does. I don't think this should be a problem.
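The distinction can be written out schematically (notation mine, as a
sketch of the two proof obligations being contrasted):

```latex
% Soundness: the obligation in Eliezer's reading. By Lob's theorem, a
% consistent system that proved this schema for every P would already
% prove every P, so it cannot prove its own verifier sound.
\forall P:\ \mathrm{Prov}_{\mathrm{new}}(\ulcorner P \urcorner) \rightarrow P

% Equivalence: what suffices on the reading above -- a statement
% relating two provability predicates, making no claim about truth.
\forall P:\ \mathrm{Prov}_{\mathrm{new}}(\ulcorner P \urcorner)
  \leftrightarrow \mathrm{Prov}_{\mathrm{old}}(\ulcorner P \urcorner)
```

The first schema asserts the verifier's soundness and runs into the
reflectivity obstacle; the second is an ordinary arithmetical claim about
two programs, which a system can in principle prove about itself.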

> Lest any despair of AI, remember that humans can think about this stuff
> without going up in flames, ergo it must be possible somehow.

Georg Cantor and Emil Post (who in the 1920s proved results similar to
those of Gödel, Church, and Turing but didn't publish them) both went
insane. Gödel starved himself to death, and Turing killed himself. A
rather worrisome correlation between the diagonalization argument and
mental breakdown...



This archive was generated by hypermail 2.1.5 : Tue Feb 21 2006 - 04:22:47 MST