Re: Meta: Minds, Machines and Gödel

From: Jef Allbright (jef@jefallbright.net)
Date: Sun Feb 25 2007 - 13:41:38 MST


On 2/25/07, Mohsen Ravanbakhsh <ravanbakhsh@gmail.com> wrote:
> What's wrong with this argument?!!! If it's true, making a (super)human is
> impossible!
>
>
>
> Minds, Machines and Gödel is J. R. Lucas's 1959 philosophical paper in which
> he argues that a human mathematician cannot be accurately represented by an
> algorithmic automaton. Appealing to Gödel's incompleteness theorem, he
> argues that for any such automaton, there would be some mathematical formula
> which it could not prove, but which the human mathematician could both see,
> and show, to be true.
>
> The paper is a Gödelian argument over mechanism.

As it seems I'm always saying, it's a matter of context.

Lucas applies different standards to the machine and the human,
apparently with the implicit assumption that while machines, by
definition, must be consistent, humans are somehow exempt (else how
could they have "free will", which is "obviously" beyond dispute?).

Lucas argues:

    Gödel's theorem states that in any consistent system which is
    strong enough to produce simple arithmetic there are formulae
    which cannot be proved-in-the-system, but which we [standing
    outside the system] can see to be true.

    Gödel's theorem must apply to cybernetical machines, because
    it is of the essence of being a machine, that it should be a concrete
    instantiation of a formal system. It follows that given any machine
    which is consistent and capable of doing simple arithmetic, there
    is a formula which it is incapable of producing as being true -- but
    which we can see to be true. It follows that no machine can be a
    complete or adequate model of the mind, that minds are essentially
    different from machines.

Lucas is correct in pointing out that the consistency of a system can
be proved only from within a context greater than and encompassing the
system of interest, but he fails to apply the same principle to the
human system. On what basis does he think human knowledge and
certainty of "truth" are warranted, given that both machines and humans
are similarly context-limited?
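
For reference, the relevant result is Gödel's second incompleteness
theorem, which in a standard textbook formulation (not Lucas's wording)
runs roughly:

    For any consistent, recursively axiomatizable theory T strong
    enough to interpret basic arithmetic,

        T \nvdash \mathrm{Con}(T)

    that is, T cannot prove the arithmetized statement of its own
    consistency; any proof of Con(T) must come from a strictly
    stronger theory standing outside T.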

Another way to look at this is to agree with his statement that "given
any machine which is consistent and capable of doing simple
arithmetic, there is a formula which it is incapable of producing as
being true -- but which we [humans] can see to be true" and extend the
argument by positing a slightly more advanced machine that can prove
the statements that stumped its more limited predecessor, and imagine
doing this recursively as far as desired. It might then become clear
that this hypothetical machine could in fact surpass the human in its
context of understanding while remaining consistent but limited, and
that the provability of any statement, by human or machine, is a
matter of context.
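
A rough sketch of that recursion, in standard notation (my
illustration, not Lucas's formulation): start from a base theory T_0,
say Peano arithmetic, and define

    T_{n+1} = T_n + \mathrm{Con}(T_n)

Each T_{n+1} proves the Gödel sentence G_{T_n} that stumped T_n (over
T_n, G_{T_n} is equivalent to \mathrm{Con}(T_n)), yet, assuming T_0 is
sound, each T_{n+1} is again consistent and has its own unprovable
sentence G_{T_{n+1}}. Every system in the chain, human or mechanical,
is "consistent but limited" in exactly the sense above; what changes
from step to step is only the context.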

Of course, given that your context and mine are necessarily disjoint
to some extent, the foregoing "proves" nothing. ;-)

- Jef


