Re: Loosemore's Proposal

From: H C (lphege@hotmail.com)
Date: Thu Oct 27 2005 - 15:09:21 MDT


So, basically all you said in that entire paragraph is "I don't think it
will work" because "the local learning mechanisms won't take into account
the level of complexity of the resulting learned content."

Well, it seems pretty obvious that if you plan to design an intelligent
learning mechanism, it must be a goal-driven mechanism that makes the best
use of the limited complexity available for computation. Clearly you have
no alternative but to design the learning mechanism to operate at a tightly
bounded level of complexity rather than at an unbounded level, because the
unbounded case is intractable.

The "generality" of the "cross domain learning" must be implicit in the
learning mechanism, not a feature process of the algorithm (=unbounded
complexity).
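
To make the "bounded complexity" point concrete, here is a minimal
illustrative sketch in Python (not taken from Novamente or from anything
Richard has described; the conjunctive-hypothesis setting and all names in
it are just assumptions for the example). A learner that caps the number of
feature tests per hypothesis searches a space that grows polynomially in
the number of features, whereas lifting that cap makes the space grow as
2**n, which is the intractable "unbounded complexity" case:

from itertools import combinations

def enumerate_hypotheses(features, max_terms):
    """Yield conjunctive hypotheses using at most `max_terms` feature tests.

    Bounding `max_terms` keeps the number of candidates polynomial in
    len(features); with no bound it grows as 2**len(features).
    """
    for k in range(1, max_terms + 1):
        for subset in combinations(features, k):
            yield subset  # hypothesis: "all features in `subset` are true"

def consistent(hypothesis, examples):
    """True if the conjunctive hypothesis matches every labelled example."""
    return all(
        label == all(example[f] for f in hypothesis)
        for example, label in examples
    )

if __name__ == "__main__":
    features = ["red", "round", "small"]
    examples = [
        ({"red": True,  "round": True,  "small": False}, True),
        ({"red": True,  "round": False, "small": True},  False),
        ({"red": False, "round": True,  "small": True},  False),
    ]
    # Bounded search: only hypotheses with at most 2 conjuncts are tried.
    for h in enumerate_hypotheses(features, max_terms=2):
        if consistent(h, examples):
            print("consistent hypothesis:", h)
            break

The bound here is the whole point: the generality comes from what the
mechanism can represent within its budget, not from letting the search run
without limit.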

--hegem0n
http://smarterhippie.blogspot.com

>From: Richard Loosemore <rpwl@lightlink.com>
>Reply-To: sl4@sl4.org
>To: sl4@sl4.org
>Subject: Re: Loosemore's Proposal
>Date: Thu, 27 Oct 2005 13:27:26 -0400
>
>This is good. I look forward to seeing what happens.
>
>Here is my prediction for how things will evolve in the future, though.
>The first set of learning mechanisms may work to the extent that their
>scope is limited, but if they aim for very general (cross-domain) learning,
>or if they are used for a developmentally extended period (i.e. if the
>system is supposed to learn some basic concepts, then use these to learn
>more advanced ones, and so on for a long period of time) it will start to
>bog down. The more ambitious the learning mechanism and the longer it is
>expected to survive without handholding, the more the result will deviate
>from what is expected. And it will "deviate" in the sense that the quality
>of what is learned will just not be adequate to make it function well.
>
>Of course, this is *in no way* meant to be a comment on the quality of
>Novamente, I am just trying to anticipate the way things would go if the
>complex systems problem turned out to be exactly as I have suggested.
>
>I'd be the happiest person around if it did not.
>
>Richard Loosemore
>
>
>Ben Goertzel wrote:
>>
>>Richard,
>>
>>Your comments pertain directly to our current work with Novamente, in
>>which
>>we are hooking it up to a 3D simulation world (AGISIM) and trying to teach
>>it simple tasks modeled on human developmental psychology. The "hooking
>>up"
>>is ongoing (involving various changes to our existing codebase which has
>>been tuned for other things) and the teaching of simple tasks probably
>>won't
>>start till December and January.
>>
>>I agree it is possible that after we teach the system for a while in the
>>environment, then it will reach a point where it can't learn what we want
>>it
>>to. Of course that is possible. We don't have a rigorous proof that the
>>system will learn the way we want it to. But we have put a lot of thought
>>and analysis and discussion into trying to ensure that it *will* learn the
>>way we want it to. I believe we can foresee the overall course of the
>>learning and the sorts of high-level structures that will emerge during
>>learning, even though the details of what will be learned are of course
>>unpredictable in advance (in practice).
>>
>>-- Ben G
>>
>>
>>
