From: Ben Goertzel (ben@goertzel.org)
Date: Sun May 19 2002 - 19:46:52 MDT
> I guess my basic thesis is: "If you don't know how to describe how it
> emerges, what it does, why it's there, how it contributes to general
> intelligence - if you do not, in short, know fully what you are doing and
> why - you will not succeed."
Sure, there is some truth in this. And I know how to describe a lot more
than I wrote in the manuscript I gave you, just as you know how to describe
a lot more than you have ever written down....
But I think your statement is a big overstatement, and this is a genuine
disagreement between us. I do believe in emergence in a deeper sense than
you do.
Time and experimentation will tell!
> > And, some of the others who read it -- who were more a priori sympathetic
> > than you to my overall philosophy of mind -- seemed to be more willing to
> > "fill in the gaps" themselves and have had a more positive assessment of
> > the design than yourself.
>
> I did fill in the gaps.
Yes, but you filled them in differently than others did ;)
Of course, the fact that so many different ways of filling in the gaps are
possible is a weakness in the manuscript...
> This makes me suspicious of a claim that a higher-level behavior emerges
> automatically because over here it sure as heck looks like a small target
> in design space.
Well maybe your intuition just ain't good enough to design an AGI ;>
> But I think it would take a concrete demonstration or at least a fully
> visualized walkthrough to convince me that there is a free lunch here.
> Sure, there might be free lunches on some behaviors, but *all* of them?
It is not the case that the Novamente design expects *all* higher-level
organizational structures & dynamics to emerge for free -- only some of them.
> I don't think this is enough to explain it. Under DGI it's pretty clear
> why you have to go through chimpanzees in order to incrementally evolve
> human intelligence (and I spend time discussing it in the paper).
Sure, and this is not a very original point, as you know...
> Given the Novamente theory in which all higher levels of cognition emerge
> naturally from a small set of lower-level behaviors, there is no obvious
> (to me) reason why the Novamente behaviors would not be incrementally
> evolvable, nor any obvious reason why spider brains would not incrementally
> scale to human size and capabilities. Is there a reason why the Novamente
> design - not just as it is now, but for all plausible variations thereof -
> is unevolvable?
Yeah, there are some detailed parts of it that seem to me probably
unevolvable -- for instance, the combinatory-logic-based schema module.
Other parts, like attention allocation, are probably easily evolvable...
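
To make the brittleness intuition concrete, here is a toy sketch -- plain
SK-style combinators in Python, nothing Novamente-specific, just an
illustration of the general point:

# Toy combinators -- an illustrative sketch only, not Novamente code.
K = lambda x: lambda y: x                     # constant combinator
S = lambda f: lambda g: lambda x: f(x)(g(x))  # substitution combinator

succ = lambda n: n + 1
double = lambda n: 2 * n

# B = S (K S) K is the standard composition combinator: B f g x == f(g(x))
B = S(K(S))(K)
assert B(succ)(double)(5) == 11  # succ(double(5))

# "Mutate" a single leaf: the final K becomes an S.
B_mut = S(K(S))(S)
try:
    B_mut(succ)(double)(5)
except TypeError:
    print("one-symbol mutation: the expression no longer even makes sense")

The point is just that in a raw combinator encoding, genotypes one symbol
apart are rarely behaviors one step apart, which is what makes incremental
evolution of such a module seem implausible to me; attention-allocation
parameters, by contrast, can be nudged continuously.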
ben