Re: [sl4] Re: More silly but friendly ideas

From: Stuart Armstrong (dragondreaming@googlemail.com)
Date: Mon Jun 30 2008 - 10:32:26 MDT


> So regarding mind, your objections would be correct if Mr. Jupiter Brain
> worked according to first-order logic, but Babbage couldn't even have
> built his Analytical Engine if he had used that. You also say that
> Gentzen came up with a system that could do arithmetic and was both
> consistent and complete, and that's true, but Gentzen's system needs an
> infinite number of symbols; so unless you're postulating an infinite,
> and not merely astronomically large, mind, Gentzen is irrelevant. For
> any mind you actually expect to build, Gödel's incompleteness theorem
> is very relevant indeed.

I still don't see why. Gödel-type propositions are rare, and even the
crudest of cut-offs would solve an infinite-loop problem. Please give a
specific example of something a goal-oriented AI would fail at because
of Gödel-type considerations.
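
To make the cut-off concrete, here is a minimal Python sketch (purely
illustrative: bounded_prove and the toy successor rule are hypothetical
names, not anyone's actual prover). It runs a breadth-first proof search
that simply gives up after a fixed step budget and answers "unknown", so
a proposition the search can't settle costs a bounded amount of effort
instead of hanging the whole agent:

from collections import deque

def bounded_prove(axioms, rule, goal, max_steps=10_000):
    """Breadth-first proof search with a crude step cut-off.

    axioms: iterable of starting facts; rule: function mapping a fact
    to the facts immediately derivable from it; goal: the fact sought.
    Rather than chase an underivable goal forever, the search stops
    after max_steps and reports "unknown".
    """
    known = set(axioms)
    frontier = deque(known)
    steps = 0
    while frontier and steps < max_steps:
        fact = frontier.popleft()
        if fact == goal:
            return "proved"
        for new in rule(fact):
            if new not in known:
                known.add(new)
                frontier.append(new)
        steps += 1
    return "proved" if goal in known else "unknown"

# A toy rule with an unbounded search space: from any fact f, derive
# s(f), giving 0, s(0), s(s(0)), ... without end.
succ = lambda f: ["s(" + f + ")"]

print(bounded_prove({"0"}, succ, "s(s(0))"))             # proved
print(bounded_prove({"0"}, succ, "1", max_steps=1000))   # unknown

With the budget in place, the worst an unprovable proposition can do is
burn its allotted steps and come back "unknown"; the agent then carries
on with its other goals rather than looping forever.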

Stuart


