Re: Changing the value system of FAI

From: Ben Goertzel (ben@goertzel.org)
Date: Wed May 10 2006 - 19:53:06 MDT


> What I'm saying is, tell me something I can't *do* because of Godel -
> not something I can't *believe*, or can't *assert*, but something I
> can't *do*. Show me a real-world optimization problem I'll have trouble
> solving - like a difficult challenge that causes me to need to rewrite
> my own source code, which I can't do because of Godelian consideration
> XYZ... This is the sense in which I know the Godelian restrictions, but
> I can't yet say that they imply limitations for AIs.

Well, because of Godel's Theorem and related results, there are
various theorems you cannot prove using any formal system that you
can represent within your finitely bounded memory. If writing down
proofs on a piece of paper constitutes a kind of "action", then
Godel's Theorem implies there are certain acts you cannot do.

A system with a bigger memory might be able to prove these theorems
using its formal system, but you can't do it....
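To make the memory bound concrete, here is a toy sketch in Python
(the "verifier" below is a made-up stand-in, not any real proof
checker): a prover whose memory holds at most M symbols can only
ever examine finitely many candidate proofs, so the set of theorems
it can establish is finite, and a prover with a larger bound covers
a strict superset of it.

from itertools import product

ALPHABET = "01"

def provable_theorems(verifier, memory_bound):
    # A prover limited to memory_bound symbols can only examine
    # finitely many candidate proofs, so the set of theorems it
    # can ever establish is finite.
    theorems = set()
    for length in range(1, memory_bound + 1):
        for symbols in product(ALPHABET, repeat=length):
            ok, theorem = verifier("".join(symbols))
            if ok:
                theorems.add(theorem)
    return theorems

def toy_verifier(proof):
    # Stand-in for a real proof checker: a "proof" is valid iff
    # it contains an even number of 1s, in which case it "proves"
    # the statement that its own string has even parity.
    if proof.count("1") % 2 == 0:
        return True, "even_parity(" + proof + ")"
    return False, None

small = provable_theorems(toy_verifier, 6)
large = provable_theorems(toy_verifier, 8)
print(len(small), len(large))  # the larger bound proves strictly more
print(small < large)           # True: a proper subset, never the reverse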

However, I agree that this kind of "action" is not very pragmatically
interesting, and I don't know for sure whether there are any
important practical limitations on AI posed by Godel's Theorem and
its relatives.

I suspect that there are, however. For instance, I *suspect* (but
have not proved) that Godel-type restrictions (appealing to
Chaitin's algorithmic-information-based variant of Godel's Theorem)
can be shown to imply that a system with memory capacity M cannot
prove or disprove the Friendliness of most AI systems with memory
capacity > M.
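A crude counting argument suggests why (this is just an illustration
of the intuition, not Chaitin's theorem itself, and the numbers
below are arbitrary): a prover whose memory holds at most M bits can
represent on the order of 2^M distinct proof texts, while there are
2^N candidate systems of description length N. So for N much larger
than M, all but a vanishing fraction of those systems can have no
Friendliness proof (or disproof) that fits in the prover's memory.

M = 20   # prover's memory capacity, in bits
N = 40   # description length of the AI systems under scrutiny, in bits

proof_slots = 2 ** M    # upper bound on proof texts expressible in M bits
candidates = 2 ** N     # programs of description length exactly N bits

# Even if every expressible proof settled the Friendliness of a
# different size-N system, the fraction covered is at most:
coverage = proof_slots / candidates
print(f"at most {coverage:.2e} of size-{N} systems can be settled")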

-- Ben


