From: Michael Wilson (firstname.lastname@example.org)
Date: Tue Oct 25 2005 - 13:59:02 MDT
Russell Wallace wrote:
> It's the other way around; nobody will understand the reasons
> _until_ you give the details.
Generally, yes. Loosemore seems to have identified a set of moderately
obvious things about intelligence which he falsely claims can only be
comprehended in the context of complex systems theory. He appears to
be mistaking a different understanding of how these things work for a
lack of comprehension of them. Setting aside the fact that Loosemore
seems to refuse to acknowledge the existence of alternate models of
these features, he isn't going to convince anyone of the validity of
his own model until he details it sufficiently to show how it can
reliably achieve interesting competencies.
Richard Loosemore wrote:
> And this is not entirely my fault, because I have looked back over
> my writing and I see that the information was clearly stated in the
Explaining AI designs is generally very hard. However, your explanations
have been particularly content-free, largely due to making isolated,
unsupported claims as if they were axioms, and this hasn't been helped
by the fact that you don't seem to have tried to adapt your dismissals
and defences to take account of other people's actual positions. It is
very easy to lose track of the inferential distance between author and
reader in the heat of an argument, but the solution is generally
providing more detail to fill in the gaps, not moaning that no one has
been able to replicate your grand understanding given what you think is
an adequate set of generic, isolated statements.
> Here is the hypothetical. Imagine that cognitive systems consist of a
> large number of "elements" which are the atoms of knowledge
> representation (an element represents a thing in the world, but that
> "thing" can be concrete or abstract, a noun-like thing or an action or
> process .... anything whatsoever). Elements are simple computational
> structures, we will suppose.
Sounds like the 'active symbols' paradigm to me. At a guess, I'd probably
take a more complicated, context-dependent view about splitting cognitive
functionality between code and data, or at a higher level of abstraction
content and structure. In my experience 'active symbol' designs tend to
have counterproductively strict restrictions on the scope of statically
defined computational structures (i.e. mechanisms implemented in code),
though you may be an exception.
> The most important aspect of an element's life is its history...
> Notice one important thing: the specific form of the final dog-element
> is a result of (a) a basic design for the general form of all elements,
> and (b) the learning mechanisms that caused the dog-element to grow into
> the adult form, as a result of experience.
You have a point, albeit a very vague one, regarding something that the
majority of academic approaches to AI (e.g. classic symbolic and
connectionist AI) do not handle well or at all. But you haven't really
done anything more than recognise the problem. All of the minimally
credible AGI designs I've seen already propose potential solutions to the
problem of progressive generation and refinement of representations and
their supporting category structures. I may not agree with most of them,
but they've gone further than just asking the question, and the question
itself is one all serious AGI researchers are already very familiar with.
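To make the contrast concrete, here is a minimal toy sketch of the kind of
element-with-a-history being described. To be clear, this is my own construal,
not your design: the `Element` class, its fields, and the hand-fed episode
strings are all hypothetical stand-ins, and a real system would have to grow
such structure from raw sensory experience.

```python
# Toy sketch of an 'element' whose adult form is the product of its
# learning history rather than a designer-specified definition.
class Element:
    def __init__(self, label):
        self.label = label
        self.history = []       # episodes that shaped this element
        self.associations = {}  # other-element label -> co-occurrence count

    def experience(self, episode, co_occurring):
        """Record an episode and strengthen links to co-occurring elements."""
        self.history.append(episode)
        for other in co_occurring:
            self.associations[other] = self.associations.get(other, 0) + 1

    def activation(self, context):
        """Crude context-sensitivity: respond in proportion to how often
        this element has co-occurred with the elements now active."""
        return sum(self.associations.get(c, 0) for c in context)

dog = Element("dog")
dog.experience("saw a dog in the park", ["park", "bark", "animal"])
dog.experience("dog barked at the mailman", ["bark", "mailman"])
# dog's association structure is now a function of its developmental
# history; two systems with different histories get different 'adults'.
```

The only point the sketch makes is that the final structure falls out of the
learning trajectory; it says nothing about the hard part, which is specifying
mechanisms that reliably grow useful structure from experience.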
> But now along comes a complex systems theorist to spoil the party. The
> CST says "This looks good, but in a real system those adult elements
> would have developed as a result of the learning mechanisms interacting
> with the real world, right? And the system would recognise real-world
> patterns as a result of the recognition mechanisms (which also got
> developed as a result of experience) operating on raw sensory input,
> The cognitive scientist agrees, and says that the learning mechanisms
> are a tough issue, and will be dealt with later.
You keep going on about this mistake like a stuck record. Everyone here
recognises that it was a mistake. No one intends to repeat it. Your ideas
on how to avoid it constitute one model among many, and so far you have
described very little of that model's actual content.
> I do understand that while the content changes during development, the
> structure of the elements does not.... but having said that, doesn't the
> element structure (as well as the structure of the "thinking" and
> "reasoning" mechanisms) have to be chosen to fit the learning
> mechanisms, not the other way around?"
This sounds like a dangerous (i.e. likely to lead to incorrect inferences)
oversimplification, though to be fair those tend to be ten a penny in any
concise attempt to discuss AGI. This point isn't quite as widely accepted
as the one above, but I suspect most people here are already aware of and
in basic agreement with it.
> Why? Because all our experience with complex systems indicates that if
> you start by looking at the final adult form of a system of interacting
> units like that, and then try to design a set of local mechanisms
> (equivalent to your learning mechanisms in this case) which could
> generate that particular adult content, you would get absolutely
You've said this about ten times now. Stop repeating it on the
assumption that people don't understand your point; in fact it is quite
familiar, even old hat. The criticisms are of your answers to the
question, not the existence of the question itself (though there are
other, possibly better, ways to put it).
> So in other words, by the time you have finished the learning
> mechanisms you will have completely thrown away your initial presupposed
> design for the structure and content of the adult elements.
This is an invalid inference. It assumes that you can't design learning,
inference and representational mechanisms /together/, such that they
can be shown to work well together at all stages of design. I think this
is perfectly possible, just hard, and indeed that's what I've been
trying to do (with due respect for the dangers of under- and
over-constraining the design).
> The development environment I suggested would be a way to do things in
> that (to some people) "backwards" way.
I wouldn't classify it as backwards, I'd classify it as over-focusing on
a different facet of the problem than the people you criticise. But to
solve the problem, you'd need to consider the whole problem, and you
can't do that without building a model of the whole problem. Trying to
build an AGI by fiddling with local dynamics is comparable to Michelangelo
trying to paint the ceiling of the Sistine Chapel while looking through a
cardboard tube held six inches away from the plaster.
> And it would not, as some people have insultingly claimed, be just a
> matter of doing some kind of random search through the space of all
> possible cognitive systems ... nothing so crude.
Trial-and-error is not the same thing as exhaustive search or random
walk. However the actual improvement in efficiency of the former compared
to the latter is strongly dependent on the available understanding of the
domain. The cognitive design space is so huge and so full of failure
modes that the modest efficiency gains trial-and-error offers will not
significantly raise the chance of building an AGI that way.
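To illustrate the distinction in miniature, here is a toy sketch of my own:
the bit-string "design space" and fitness function are stand-ins bearing no
resemblance to the real cognitive design space. Trial-and-error that reuses
feedback from each failed trial accumulates progress, while blind sampling
does not. Note that the toy shows the *best* case for trial-and-error, a
perfectly clean fitness signal; the claim above is precisely that cognitive
design gives you nothing like this.

```python
import random

N = 20                 # bits in the toy design space (2**20 candidates)
TARGET = [1] * N       # the one 'working' design

def fitness(cand):
    # How many bits of the candidate match the target design.
    return sum(a == b for a, b in zip(cand, TARGET))

def blind_sampling(budget, rng):
    # Random walk through the space: failures teach us nothing.
    best = 0
    for _ in range(budget):
        cand = [rng.randint(0, 1) for _ in range(N)]
        best = max(best, fitness(cand))
    return best

def trial_and_error(budget, rng):
    # Hill-climbing: each failed trial still says whether a one-bit
    # change helped, so partial knowledge of the domain accumulates.
    cand = [rng.randint(0, 1) for _ in range(N)]
    for _ in range(budget):
        if cand == TARGET:
            break
        i = rng.randrange(N)
        flipped = cand[:]
        flipped[i] ^= 1
        if fitness(flipped) >= fitness(cand):
            cand = flipped
    return fitness(cand)

blind_best = blind_sampling(500, random.Random(0))
informed_best = trial_and_error(500, random.Random(1))
print(blind_best, informed_best)
```

With an informative per-trial signal the informed search wins easily; make
the signal noisy or absent and the two collapse toward each other, which is
why the efficiency gain depends on available understanding of the domain.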
> All power to you if you do: you will never get what I am trying to
> say here, and there would be no point me talking about the structure
> of the development environment.
If you honestly believe that anyone who disagrees with you must be
incapable of comprehending your cognitive model, you are engaging in
preaching rather than discussion, and what you are doing will resemble
religion more than science.
* Michael Wilson
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT