Re: Non-black non-ravens etc.

From: Richard Loosemore (rpwl@lightlink.com)
Date: Mon Sep 12 2005 - 14:43:27 MDT


Michael,

I assumed that most people on this list would not be trapped into any of
the routine misunderstandings that you mention: I certainly wasn't.

My argument hinged on the significance of the word "may" in your
statement "... AIs may implement logical inference as a low-level
mechanism ...".

In my post on the relevance of complex systems, I set out the reasons
why it is extremely questionable to assume that anyone can build a valid
AGI by starting with the observation of logical reasoning at the
extremely high level at which we know it, then using this as the basis
for
the lowest level mechanisms of an AGI.

So far, there has not been one coherent response to the central point
that I made in that post and my subsequent follow-up. Most responders,
and especially yourself and Eliezer, used "complex system" to refer to
things that were not, in fact, complex systems at all, thus rendering
their comments irrelevant.

Richard Loosemore

Michael Wilson wrote:
> Richard Loosemore wrote:
>
>>Interestingly enough, a real thinking system would respond by noting
>>that the question of whether all ravens are black is best answered
>>not by fabulously complex appeals to probability theory, but by
>>appeal to some background understanding of what makes them black:
>>genetic characteristics.
>
>
> Something is seriously broken if you think that probability theory is
> 'fabulously complex' compared to nearly any causal explanation, never
> mind one as sophisticated as genetics. On the contrary, probability
> theory is extremely simple; the only reason humans find it hard is
> that it can be highly counter-intuitive. The total process of a
> human applying probability theory /is/ fabulously complex, but this
> need not be true for an AI, as I will detail below.
>
>
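For concreteness, the paradox in the thread's subject line makes a
good test case: the Bayesian treatment of it fits in a few lines.
The sketch below (Python) uses made-up population counts; every
number in it is an illustrative assumption, not data.

    # H1: all ravens are black.  H2: exactly one raven is white.
    RAVENS = 100      # assumed number of ravens in the world
    K = 10**9         # assumed number of non-black non-ravens

    def posterior(prior_h1, lr):
        """P(H1 | evidence), given a prior and the likelihood ratio
        lr = P(evidence | H1) / P(evidence | H2)."""
        odds = (prior_h1 / (1.0 - prior_h1)) * lr
        return odds / (1.0 + odds)

    # Evidence A: sample a random NON-BLACK object; it is not a raven.
    # P(A|H1) = 1 (H1 allows no non-black ravens); P(A|H2) = K/(K+1).
    lr_nonblack_nonraven = (K + 1.0) / K          # ~1 + 1e-9

    # Evidence B: sample a random RAVEN; it is black.
    # P(B|H1) = 1; P(B|H2) = (RAVENS - 1)/RAVENS.
    lr_black_raven = RAVENS / (RAVENS - 1.0)      # ~1.01

    print(posterior(0.5, lr_nonblack_nonraven))   # barely above 0.5
    print(posterior(0.5, lr_black_raven))         # ~0.5025

The machinery is ordinary odds arithmetic: a non-black non-raven does
confirm "all ravens are black", but by a factor so close to 1 that it
is worthless as evidence, while each observed black raven confirms it
noticeably more. Whether a mind should work this way at its lowest
level is, of course, exactly what is in dispute in this thread.
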
>>Real thinking systems, in practice, obviously cope with the task of
>>gathering new knowledge by little strategies and appeals to connected
>>knowledge... (I) don't yet have a clear reason to believe that Bayesian
>>inference would be necessary or useful or even applicable at that base
>>(local mechanisms) level,
>
>
> I'm not sure that you have a clear idea of how advocates of rational
> cognition are proposing that mechanisms be layered. I have often shot
> down armchair critics of AI for confusing the fact that humans
> implement logical inference as a high-level, symbolic process
> (supported by grounded concepts and all sorts of complex lower-level
> mechanisms) with the fact that AIs may implement logical inference as
> a low-level mechanism (and indeed, all AIs on current hardware are
> ultimately based on boolean logic). I'm not certain, but it looks like
> you're making the same kind of mistake; an AI can support
> 'irrational'/'fuzzy'/'intuitive' sorts of cognition as a layer above a
> logical substrate, and then possibly even another layer of high-level
> logical inference above that. Your criticism would apply if
> probability theory were being used the way humans use it, as a layer
> above concepts, grounding etc., or the way that GOFAI might use it,
> i.e. on its own without any of the other necessary complexity, but it
> is inapplicable as a criticism of Bayes as the underlying basis for
> cognition. You are correct that rationally based cognition has not
> been shown to be tractable, but I addressed that in an earlier reply.
>
> * Michael Wilson
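
The layering described above (boolean logic at the bottom,
'fuzzy'-looking cognition built on top of it) can at least be
illustrated in miniature. The sketch below (Python) puts a
probabilistic layer, weighted model counting, directly on a purely
boolean substrate, truth-table enumeration of a propositional
constraint. The variables, weights, and queries are illustrative
assumptions, not a proposal for an actual AGI design.

    from itertools import product

    VARS = ['raven', 'black']

    def formula(m):
        # Bottom layer: ordinary boolean logic.  Encodes the
        # background constraint 'raven -> black' for assignment m.
        return (not m['raven']) or m['black']

    # Upper layer: assumed, independent prior weights per variable.
    PRIOR = {'raven': 0.001, 'black': 0.3}

    def weight(m):
        w = 1.0
        for v in VARS:
            w *= PRIOR[v] if m[v] else 1.0 - PRIOR[v]
        return w

    def prob(query, given=lambda m: True):
        """P(query | given, formula), computed by weighted
        enumeration of every boolean truth assignment."""
        num = den = 0.0
        for bits in product([False, True], repeat=len(VARS)):
            m = dict(zip(VARS, bits))
            if formula(m) and given(m):
                den += weight(m)
                if query(m):
                    num += weight(m)
        return num / den

    print(prob(lambda m: m['black'], given=lambda m: m['raven']))  # 1.0
    print(prob(lambda m: m['raven'], given=lambda m: m['black']))  # 0.001

Nothing here bears on whether such a stack is tractable at scale; it
only shows that "probabilistic above logical" is a coherent ordering
of layers, which is the distinction being drawn above.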


