RE: Is complex emergence necessary for intelligence under limited resources?

From: Ben Goertzel
Date: Wed Sep 21 2005 - 18:44:33 MDT


To an extent I feel you're attacking a straw man in these long emails.

Basically no one on this list (Stephen Reed of Cycorp may be an
exception, but I'm not even sure about that, because I'd hesitate to
identify his own views with those of Doug Lenat) advocates the GOFAI
approach that denies the importance of symbol grounding. I think
all the AI folks on this list agree with you that symbol grounding
is an important problem and that GOFAI sucks. We may not agree with
you (or each other) on the best solution to the symbol grounding
problem, though.

> The boot, I submit, is on the other foot. The rest of the community is
> asking the non-complex AGI folks [apologies for the awkward term: I am
> not sure what to call people who eschew complexity, except perhaps the
> Old Guard] why *we* should go along with what looks like their blind
> faith in being able to build a fully capable, grounded AGI without
> resorting to complexity.

The boot is on both feet ;-)

Personally I am happy to entertain ANYONE's viable-sounding, reasonably
specific ideas about how to make an AGI, whether they involve complexity
or not.

I am less interested in generalities about what kind of system an AGI
should or could be, because I don't think we know enough to make
such generalizations reliably at this point.

I think there may be many kinds of possible AGIs, with
varying degrees of reliance on complex dynamics and emergent
phenomena.
> In closing, let me say that I look very negative when presenting this
> argument, even though I actually do have concrete suggestions for what
> we should do instead.

Well, I am more curious to hear them than to hear philosophizing about the
power of complexity ;)

I have nothing against philosophizing about complexity -- there is a
chapter on complexity in the philosophy-of-mind book I'm nearly done
writing -- but I've heard and done a lot of it already...

> Right
> now, my goal is to suggest that here we have an issue of truly enormous
> importance, and that we should first of all accept that it really is an
> issue, then go on to talk about what can be done about it. But I want
> to get to first base first and get people to agree that there is an issue.

I agree that symbol grounding is a key problem in AI, that complexity is
a key aspect of human intelligence, and that complexity MAY (or may not)
be a necessary aspect of AI under limited resources.

You seem more confident that complexity is a necessary aspect of AI,
but I'm not convinced by your arguments...

-- Ben

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT