Re: Loosemore's Proposal

From: H C (lphege@hotmail.com)
Date: Tue Oct 25 2005 - 15:58:16 MDT


>From: Richard Loosemore <rpwl@lightlink.com>
>Reply-To: sl4@sl4.org
>To: sl4@sl4.org
>Subject: Re: Loosemore's Proposal
>Date: Tue, 25 Oct 2005 14:53:10 -0400
>
>
>Another way to say what I have been trying to say:

Go ahead, try not to get too frustrated about it.

>
>The question is: how does the *design* of a cognitive system's learning
>mechanisms interact with the *design* of its "thinking and reasoning and
>knowledge representation" mechanism?

I think I smell where you are going with all of this, and something tells me
to follow my nose.

>
>Can you, for example, sort out the thinking/reasoning/knowledge
>representation mechanism first, then go back and find some good learning
>mechanisms that will fill that mechanism with the right sort of data, using
>only real world interaction and (virtually) no hand-holding from the
>experimenter?

Just as easily as you can start with some basic rules for a learning
mechanism and try to retro-fit the representation.

>
>Or is it the case that you can pick a thinking/reasoning/knowledge
>representation mechanism of your choice, and then discover to your horror
>that there is not ever going to be a learning mechanism that feeds that
>mechanism properly?

Apply the same isomorphism as above.

>
>Now, complex adaptive systems theory would seem to indicate that if the
>learning mechanisms are powerful enough to make the system Class IV (i.e.
>complex-adaptive), the global behavior of those learning mechanisms is
>going to be disconnected from the local behavior .... you can't pick a
>global behavior first and then pick a local mechanism that generates that
>behavior. That is the disconnect.

Everyone [should] agree about this. Of course, global and local are
relative; they aren't REALLY disconnected. But everyone should agree with
that, too.
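To make the local/global disconnect concrete, here is a minimal sketch using Wolfram's Rule 110, a canonical Class IV elementary cellular automaton (the rule number, widths, and step counts here are illustrative choices, not anything from the thread). Each cell updates from a three-cell neighborhood, yet the global evolution produces interacting glider-like structures that you could not predict by inspecting the eight-entry local rule table:

```python
# Wolfram's Rule 110: a canonical Class IV (complex) cellular automaton.
# Each cell's next state depends only on itself and its two neighbors,
# yet the global evolution supports interacting glider-like structures
# (and is in fact Turing-complete) -- you cannot read the global
# behavior off the local rule table, which is the "disconnect" at issue.

RULE = 110  # local update rule, encoded as an 8-bit lookup table


def step(cells):
    """Apply the local rule once, with wraparound boundaries."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]


def run(width=64, steps=20):
    """Evolve a single live cell and return the full history of rows."""
    row = [0] * width
    row[width // 2] = 1  # single live cell as the initial condition
    history = [row]
    for _ in range(steps):
        row = step(row)
        history.append(row)
    return history


if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Running this prints the familiar irregular Rule 110 triangle pattern; choosing the local rule table gives you no direct handle on which global structures will emerge.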

>
>If this were the case with cognitive systems, we would get the situation we
>have now. And one way out would be to build the kind of development
>environment and adopt the kind of research strategy I have talked about.
>
>Richard Loosemore.

I think the connecting point in this whole dialogue is the paradoxical
situation of "where to start". Actually, that pretty much sums up artificial
intelligence research since its inception.

The reason this deserves more introspection than seems reasonable a priori
is that, first of all, lots of people have "started" in lots of different
places. Second, they were all wrong, and we can obviously look at what they
did and try to learn from it (a technique that will quite predictably return
some marginal gains in understanding).

Another point is that the psychological and philosophical writings of the
past few thousand years contain lots and lots of empirical data. Not just
"empirical studies", but empirical evidence taken directly from an
introspective point of view; that is, the kind of empirical evidence that
doesn't really exist explicitly in any formal study (and when it does, it's
too vague to draw any useful conclusions from).

Third, a few strong points can be established that are backed not by any
elaborate explanation or massive study, but are recognized to be important,
in some general sense, strictly on the basis of the sort of function they
provide. For example, Bayes' theorem (and damn you if you don't believe it,
lol).
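As a quick illustration of why Bayes' theorem earns "strong point" status on function alone, here is a worked example (the test sensitivity, false-positive rate, and prevalence numbers are hypothetical, chosen only to make the arithmetic vivid):

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E).
# Hypothetical numbers for illustration: a test that detects a condition
# 99% of the time, false-alarms 5% of the time, applied where the
# condition has 1% prevalence.


def posterior(prior, likelihood, false_positive_rate):
    """P(H|E) via Bayes' rule, expanding P(E) over H and not-H."""
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence


p = posterior(prior=0.01, likelihood=0.99, false_positive_rate=0.05)
print(round(p, 3))  # prints 0.167
```

Despite a 99%-sensitive test, a positive result only raises the probability to about 17%, because the prior is so low; the theorem's value is exactly this kind of mechanical correction of intuition.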

Finally, these few strong points (of which I can't really offer further
examples, though they exist profusely) were derived not from studies that
explicitly implied their utility for intelligence, but from the fact that
the general function (without respect to intelligence) seems in some sense
necessary. The primary failure point is not being presumptuous about the
utility of your intuitions, but being presumptuous about the implementation
of your intuition.

Thus, it isn't that we need to keep selecting our intuitions more carefully
(which is obviously true no matter what point you are arguing, or which side
you are on), but that we need to be far more cautious about how our
speculations about the implementation of such an intuition actually apply.

-- Th3Hegem0n
http://smarterhippie.blogspot.com



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT