From: Lee Corbin (lcorbin@rawbw.com)
Date: Thu Jul 03 2008 - 09:32:14 MDT
Evidently the list is back up, John, so I'll copy the list on my reply.
> On Mon, 30 Jun 2008 "Lee Corbin" said:
>
> > highly capable intelligences won't "work" by any formal logic.
>
> It doesn't matter what the word "work" means: if your mind is capable
> of doing arithmetic, then there are statements, lots of them, for which
> you will never find a proof that they are true and you will never find
> a counterexample to prove them false.
Yes. One would have supposed this to be true anyway, even
before Gödel, just because of the finitude of any given human
(mechanical) mind. For *practical* matters, even Hilbert would
have conceded that. But Gödel proved it even in the most
idealistic cases: it holds for second-order systems too, and for
any consistent, effectively axiomatized system that includes an
arithmetical subcomponent. *Formal* proofs have their limitations.
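For reference, the usual modern statement of the first
incompleteness theorem runs roughly as follows (my paraphrase,
in standard notation, not anything John wrote):

    If T is consistent, effectively axiomatized, and interprets
    basic arithmetic (Robinson's Q suffices), then there is a
    sentence G_T such that T \nvdash G_T and T \nvdash \neg G_T.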
> I'm not saying a mind will "work" by formal logic, I am saying that
> formal logic can be used to examine the limitations of any mind that has
> certain properties, like the ability to do arithmetic for example.
Examine in what sense? It makes no sense to me that someone
should examine my mind---even if they have 25th century
technology---and *because* of Gödel's theorem prove that
I have limitations. My only limitations---and thus the limitations
of any mind, when you really come down to it---are that
there is only so much time and so much effort that can be applied.
Gödel's theorem doesn't really apply because only an idiot would
try to prove everything formally from a fixed system.
> > AIs can be plenty smart without getting hung up on hard problems.
> Yes that is true. A real mind might want to solve a certain problem but
> after working on it for a long time and making no progress it might
> judge that its time could be better spent doing other things and move
> on. However, your fictional fixed-goal mind can't do that; sooner or
> later it is going to encounter one of those Gödel-Turing problems, and
> when it does it's going to be caught up in a loop for eternity.
That's true, but *only* if that mind limits itself to formal proofs.
Nobody does that. In any real model---say, for specifics, the
reality of the actual set of literary critics in a certain year---we
may or may not be able to demonstrate that "all critics admire
only one another". But we won't be using formal proofs to do it.
> I believe that is why Evolution never came up with a fixed goal
> mind, they don't work.
What about a "fixed-goal mind" whose only passion was to find a
scheme that unified GR and QM? We cannot say ahead of time
whether that totally focused individual (be it an AI or not) will
ever succeed. Likewise for Goldbach's conjecture. Now yes, *if*
Goldbach's conjecture turns out to be true but (most unlikely) to
have no proof at all, then it follows trivially that it has no
formal proof either, and it would stand as an example of Gödelian
incompleteness/undecidability.
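To make John's "caught in a loop" worry concrete, consider a
literal fixed-goal searcher for a Goldbach counterexample (a
minimal sketch in Python; the code and its names are my own
illustration, not anything from this thread). It halts if and
only if a counterexample exists; if the conjecture is true, it
grinds on forever:

    # A caricature of a "fixed-goal mind": enumerate even numbers
    # looking for one that is not a sum of two primes. Halts iff
    # Goldbach's conjecture is false.
    def is_prime(n):
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True

    def find_goldbach_counterexample():
        n = 4
        while True:
            # If no p with p and n - p both prime exists, then n
            # is a counterexample and the "mind" finally halts.
            if not any(is_prime(p) and is_prime(n - p)
                       for p in range(2, n - 1)):
                return n
            n += 2

A real mind, of course, would never commit itself to such a blind
enumeration; it would step back, try other methods, or move on,
which is just the point I am making above.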
In principle, it seems to me that a Focused mind (e.g. the ones
wonderfully depicted by Vinge in "A Deepness in the Sky"),
whether AI or human, might be what you are calling a fixed-goal
mind. Yes, I admit that evolution has so far produced
relatively few of these. (Probably certain artists, composers,
or even mathematicians come close to being "fixed goal"
minds, but only an AI could be 100%, I guess.) It's here
that I agree with you that if the thing is highly intelligent---
e.g. able to absorb inspiration from many sources---then
it may start fiddling with the very way that it itself thinks,
and then its behavior is not going to be predictable.
All we can do, it seems to me, is tilt the odds a bit in our favor,
and I don't have any reason to dismiss out of hand the efforts
of the Friendly AI types to do just that. On the other hand, I
admit that, in distinction to what I wrote the last time I answered
you on this question, I've read some dubious posts claiming that it
would be possible to permanently tie down certain exceedingly
intelligent artificial minds so that it could be predicted with
near 100% probability that there are certain things they will
never do. Those posts strike me as dubious because, to me, being
highly intelligent *means* being open to inspiration from any
direction.
Bottom line of our disagreement: I still affirm that Gödel's Theorem
has not the least *practical* effect on the development of AI, nor
does it impose any reasonable limit on what an AI could do, and it
is thus just as irrelevant to any AI's thinking as it is to ours.
Lee