Re: Human mind not Turing computable according to Eliezer?

From: Bill Hibbard
Date: Fri Oct 08 2004 - 09:21:11 MDT

Hi Christian,

> First, why do I think that your refutation attempt is flawed:
> You wrote that the construction of Penrose cannot
> be performed since the human mind is not a Turing machine,
> but a finite state machine.
> However, every finite state machine can be modelled by
> a Turing machine, so I think that the construction
> of Penrose can still be performed.

But his argument depends on a property of Turing machines, namely
the ability to do arithmetic with arbitrary integers, not shared
by finite state machines. So his argument fails by assuming a
capability that humans do not have.
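To make the point concrete, here is a toy sketch (my own illustration, not anything from Penrose's book): a machine with a fixed, finite set of states can only count modulo its state count, while unbounded integers, standing in for a Turing machine's infinite tape, have no such ceiling.

```python
# A finite state machine with num_states states can only track a count
# modulo num_states; the "unbounded" version uses Python's
# arbitrary-precision integers as a stand-in for an infinite tape.

def fsm_increment(state, num_states=8):
    """Increment in a machine with only num_states states: it wraps."""
    return (state + 1) % num_states

state = 0
for _ in range(10):       # ten increments...
    state = fsm_increment(state)
print(state)              # 2: only 10 mod 8 survives

unbounded = 0
for _ in range(10):
    unbounded += 1
print(unbounded)          # 10: no wrap-around with unbounded integers
```

The finite machine cannot even represent "10" once its states are exhausted, which is exactly the capability gap the argument trades on.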

> More concretely, you wrote:
> > Here is where the argument breaks down. With Turing machines,
> > we said there must be some integer k such that the Turing machine
> > TM_k will give the same answer to the question encoded by n that
> > TM_b gives to question Q2. The integer k exists because we can
> > construct a Turing machine TM_x that converts any positive integer
> > n into the index of question Q2, and we can combine TM_x and
> > TM_b to get TM_k. But there is no finite state machine that can
> > convert an arbitrary integer n into the index of question Q2'.
> In fact, for Penrose's argument to work, it is irrelevant whether TM_k
> is a finite state machine. The only important point is whether the
> reader can be modelled by a Turing machine TM_b or not. Everything
> else is irrelevant.

The problem is that his argument depends on assuming an ability
in his Turing machine model not possessed by the human brains
being modeled.

> You don't solve anything by answering that his argument is flawed
> because the human brain is an even more restricted type of Turing
> machine. Then, he could ask: "How come an even more
> restricted type of Turing machine can solve such a hard problem?"
> Doesn't it really show that the human brain in fact is *not* a finite
> state machine?

This is the funny thing about infinite sets: they have
very different properties than finite sets. By assuming
that humans can do things that require infinite sets of
states, he has moved his argument into a realm irrelevant
to human brain behaviors.
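A classic way to see this (again my own illustration): by the Myhill-Nerode theorem, recognizing balanced parentheses of arbitrary depth requires infinitely many distinguishable states, so any machine with only k states must confuse some depths with each other.

```python
# A depth tracker with only k states: nesting depth is kept modulo k,
# so depth 0 and depth k are indistinguishable to this machine.

def finite_tracker(s, k=4):
    """Return True if s 'looks balanced' to a k-state machine."""
    depth = 0
    for ch in s:
        depth = (depth + (1 if ch == "(" else -1)) % k
    return depth == 0

print(finite_tracker("()"))      # True, and correct
print(finite_tracker("(((("))    # True, but the string is unbalanced:
                                 # 4 unmatched opens wrap back to state 0
```

Only by assuming infinitely many states does the "recognize any depth" ability appear, and that assumption is exactly what finite brains lack.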

I would note that some of the most interesting approaches
to AI, such as those by Pei Wang and James Rogers, are
based on the very explicit assumption that brains have
finite capacities and avoid the implicit infinite states
assumed in things like traditional stack recursion and
list processing.
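To illustrate the flavor of that finite-capacity stance (this is my own sketch, not how Wang's or Rogers' systems actually work): instead of unbounded recursion, which implicitly assumes an infinite stack, one can process nested structures with an explicit work stack of declared, finite capacity.

```python
# Sum a nested list using an explicit, capacity-bounded work stack,
# rather than recursion with an implicitly infinite call stack.

def depth_bounded_sum(values, capacity=100):
    """Sum nested lists of numbers; fail loudly if capacity is exceeded."""
    stack = [values]
    total = 0
    while stack:
        if len(stack) > capacity:
            raise MemoryError("finite capacity exceeded")
        item = stack.pop()
        if isinstance(item, list):
            stack.extend(item)    # defer sublist elements to the stack
        else:
            total += item
    return total

print(depth_bounded_sum([1, [2, [3, 4]], 5]))   # 15
```

The resource bound is explicit in the model rather than an idealization smuggled in by the formalism.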


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:49 MDT