Re: Non-black non-ravens etc.

From: Richard Loosemore (rpwl@lightlink.com)
Date: Mon Sep 12 2005 - 18:22:00 MDT


Ben Goertzel wrote:
>
> Richard,
>
>
>>In my post on the relevance of complex systems, I set out the reasons
>>why it is extremely questionable to assume that anyone can build a valid
>>AGI by starting with the observation of logical reasoning at the
>>extremely high level at which we know it, then using this as the basis for
>>the lowest level mechanisms of an AGI.
>
>
> I don't think that logical reasoning can serve as the sole basis for an AGI
> design, but I think it can serve as one of the primary bases.
>
> I think that emergence and complex dynamics are necessary aspects of
> intelligence given limited computational resources.
>
> I think that it is quite feasible (and in fact a good idea) to give logic a
> more primary role in an AGI than it has in humans. But that doesn't mean I
> advocate GOFAI. It means I advocate AGI systems that intelligently couple
> logical inference with complex, self-organizing dynamics.
>
> In the human mind, arguably, abstract logical reasoning exists ONLY as a
> high-level emergent phenomenon. However, I suggest that in an AGI system,
> logical reasoning may exist BOTH as a low-level wired-in subsystem AND as a
> high-level emergent phenomenon, and that these two aspects of logic in the
> AGI system may be coordinated closely together.
>
> Do you have an argument against this sort of approach? It is not based on
> simulating human intelligence closely, but rather based on trying to combine
> the best of human intelligence with the best of computer
> technology/software -- with the aim of making an AGI that embodies
> creativity and empathy and rationality superior to that of humans.
>
> -- Ben G

You raise an interesting question. If you were assuming that "logical
reasoning" (in a fairly general sense, not committed to Bayes or
whatever) was THE basic substrate of the AGI system, then I would be
skeptical of its chances of succeeding. If, as you suggest, you are only
hoping to give logic a more primary role than it has in humans (but not
exclusive rights to the whole show), then I am sure that is feasible.

The real difficulty lies in how to generate and refine the elementary
symbols that the logical reasoning component works on. If some other
system did that, and was then smoothly integrated with the logical part,
there would be no problem. It is the grounding of those symbols that is
the sticking point.

Personally, I feel that the "other" part is going to be massive, and
needs a lot more thought than it gets. To put that another way, I think
there are many AI formalisms that look great on paper but which, when
implemented, leave all the really important stuff hidden in the mind of
the programmer (who invented, preprocessed and then interpreted the
symbols that were fed to the formalism). This is of course the
grounding problem itself: recognized and appreciated by many, yet still
very much with us today.
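
To make that concrete, here is a minimal toy sketch (my own
illustration, in Python; every name in it -- 'bird', 'can_fly',
'tweety' -- is a made-up placeholder, not anything from a real design):
a few lines of forward chaining over hand-authored predicates. The
inference step is trivially correct, but the engine only matches
tokens; everything the symbols are supposed to be about was invented,
preprocessed and will later be interpreted by the programmer, entirely
outside the formalism.

    # Toy forward-chaining engine over hand-authored (predicate, argument)
    # tuples.  The engine just matches tokens; it never touches whatever
    # the tokens are meant to denote.
    facts = {("bird", "tweety")}
    rules = [
        # IF (bird, X) THEN (can_fly, X)
        (("bird",), ("can_fly",)),
    ]

    def forward_chain(facts, rules):
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for (pre,), (post,) in rules:
                for (pred, arg) in list(derived):
                    if pred == pre and (post, arg) not in derived:
                        derived.add((post, arg))
                        changed = True
        return derived

    print(forward_chain(facts, rules))
    # prints both facts, e.g. {('bird', 'tweety'), ('can_fly', 'tweety')}
    # The "bird" and the "flying" exist only in the programmer's head.

The interesting work, of course, is whatever would produce and maintain
those tuples from the system's own contact with the world; that is the
"massive" other part I am talking about.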

Lastly, you say: "However, I suggest that in an AGI system, logical
reasoning may exist BOTH as a low-level wired-in subsystem AND as a
high-level emergent phenomenon, and that these two aspects of logic in
the AGI system may be coordinated closely together." If it really did
that, it would (as I understand it) be quite a surprise, to put it
mildly: complex adaptive systems (CAS) do not as a rule show that kind
of weird reflection, as I said in my earlier posts. I suppose we could
call this "self-similar" behavior (the emergence of a copy of the
low-level mechanisms in the highest-level emergent behavior), and my
understanding is that this has either never been observed, or happens
only under peculiar circumstances.

Richard


