Re: Non-black non-ravens etc.

From: Chris Capel
Date: Mon Sep 12 2005 - 18:54:32 MDT

On 9/12/05, Richard Loosemore wrote:
> Ben Goertzel wrote:
> > I don't think that logical reasoning can serve as the sole basis for an AGI
> > design, but I think it can serve as one of the primary bases.
> You raise an interesting question. If you were assuming that "logical
> reasoning" (in a fairly general sense, not committed to Bayes or
> whatever) was THE basic substrate of the AGI system, then I would be
> skeptical of it succeeding. If, as you suggest, you are only hoping to
> give logic a more primary role than it has in humans (but not exclusive
> rights to the whole show), then that I am sure is feasible.
> Lastly, you say: "However, I suggest that in an AGI system, logical
> reasoning may exist BOTH as a low-level wired-in subsystem AND as a
> high-level emergent phenomenon, and that these two aspects of logic in
> the AGI system may be coordinated closely together." If it really did
> that, it would (as I understand it) be quite a surprise (to put it
> mildly) ... CAS systems do not as a rule show that kind of weird
> reflection, as I said in my earlier posts.

I'm not sure I can reconcile these two opinions. If you think it's
feasible to use some sort of logical reasoning (whether rational
probability analysis or something else) as part of the basic
substrate of a generally intelligent system, and given that any
successful AI project would necessarily result in a system that
*does* exhibit logical reasoning at a high level, how could you find
it unlikely that a system would combine both features? I probably
misunderstand you.

Oh, and do fractal patterns not emerge in many complex systems? (Curious.)
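To make the question concrete: the classic "chaos game" is a toy complex system where a fractal emerges from a trivially simple iterated rule. (This example is mine, not from the thread; the vertex coordinates and starting point are arbitrary choices for illustration.) Jumping halfway toward a randomly chosen vertex of a triangle traces out the Sierpinski triangle, even though no single step of the rule mentions any fractal structure:

```python
import random

def chaos_game(steps=10000, seed=0):
    """Iterate the 'chaos game': repeatedly move halfway toward a
    randomly chosen vertex of a triangle. The visited points trace
    out the Sierpinski triangle, a fractal that emerges from the
    rule rather than being written into it."""
    random.seed(seed)
    vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
    x, y = 0.25, 0.25  # any starting point inside the triangle works
    points = []
    for _ in range(steps):
        vx, vy = random.choice(vertices)
        x, y = (x + vx) / 2, (y + vy) / 2
        points.append((x, y))
    return points

points = chaos_game()
# One hallmark of the Sierpinski structure: after the first step,
# no visited point ever lands strictly inside the central inverted
# triangle with corners (0.5, 0), (0.25, 0.5), (0.75, 0.5).
```

The interesting part, for the emergence question, is that the hole-avoidance property is a global pattern of the whole trajectory, not a property of any individual halving step.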

Chris Capel

"What is it like to be a bat? What is it like to bat a bee? What is it
like to be a bee being batted? What is it like to be a batted bee?"
-- The Mind's I (Hofstadter, Dennett)

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT