From: Michael Roy Ames (michaelroyames@yahoo.com)
Date: Wed Jan 01 2003 - 08:24:50 MST
Dear SL4,
Ben wrote,
>
> Well, it's true that Michael's tests do not explicitly seek to
> distinguish learning from metalearning etc.
>
Correct. Learning new methods of learning is quite an advanced topic
from an "external game playing" POV. But IMO, any AGI worth its salt
is going to have internal mechanisms for doing this right from the
start... else it is not going to get very far. Learning about new ways
of learning is no different from any other kind of learning; using
that information to modify cognition is the trick. I imagine it will
entail a combination of heuristics, random guessing, pattern
recognition on stored experience, analogizing, etc... but that is a
discussion for another time (after I see how Eliezer fleshes out
LOGI ;)
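To make that notion a little more concrete, here is a toy Python
sketch of one possible metalearning loop. Everything in it is
illustrative: the Strategy objects and their apply() method are
hypothetical stand-ins, not anything from LOGI or any real AGI design.

import random

class MetaLearner:
    def __init__(self, strategies):
        # stored experience: each strategy maps to its past scores
        self.history = {s: [] for s in strategies}

    def pick_strategy(self, explore=0.1):
        # heuristics plus a little random guessing
        if random.random() < explore or not any(self.history.values()):
            return random.choice(list(self.history))
        # crude pattern recognition on stored experience:
        # favor the strategy with the best average past score
        return max(self.history,
                   key=lambda s: sum(self.history[s]) /
                                 max(len(self.history[s]), 1))

    def learn(self, task):
        strategy = self.pick_strategy()
        score = strategy.apply(task)  # hypothetical Strategy.apply()
        # record the outcome, modifying how future tasks are approached
        self.history[strategy].append(score)
        return score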
>
> If a system is able to "learn how to learn", then it should be able
> to carry over learning from one test to the next in the test suite.
>
Yeppers. And that is one of the main purposes of the Curriculum: to
provide a base of experience in simple domains, enabling faster,
cheaper learning in more complex domains. I really have a lot more
game ideas; I just haven't had the time to write them into the doc
yet ;)
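For what it's worth, that kind of carry-over is also measurable. A
rough sketch of how, with make_agent(), the lesson lists, and
run_until_learned() all being hypothetical stand-ins:

def trials_to_criterion(agent, lessons):
    # how many trials each lesson takes before the agent masters it
    return [lesson.run_until_learned(agent) for lesson in lessons]

fresh = trials_to_criterion(make_agent(), complex_lessons)

schooled_agent = make_agent()
trials_to_criterion(schooled_agent, simple_lessons)  # the Curriculum
schooled = trials_to_criterion(schooled_agent, complex_lessons)

# carry-over shows up as schooled[i] < fresh[i] across the suite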
>
> One could construct a similar regime to test learning how to learn
> how to learn, though it would be a lot more elaborate and
> time-consuming.
>
The basis of this is already laid out in the lessons on 'Understanding
Symbols' and 'Algorithmic Fundamentals'. Once the AGI has learned how
to 'follow instructions', i.e. algorithms, and learned how to
associate symbols with objects, then you could tell ver an algorithm,
assign a symbol to the algorithm, and then provide that symbol along
with another question. The symbol would be the hint: "here's a way to
solve the problem". The "problem" could itself be a way to solve a
problem. Of course, all of my games would be utterly pointless if the
candidate AGI "don't got what it takes" to generalize the given
algorithm, or 'draw an analogy' from it to apply to a new situation.
My Curriculum is *about* providing simple opportunities to do that.
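A toy Python version of that teach-then-hint protocol (the function
names are mine, purely for illustration):

symbol_table = {}

def teach(symbol, algorithm):
    # "tell ver an algorithm, assign a symbol to the algorithm"
    symbol_table[symbol] = algorithm

def ask(question, hint=None):
    # "provide that symbol along with another question"
    if hint in symbol_table:
        return symbol_table[hint](question)
    raise NotImplementedError("no usable hint: generalize on your own")

teach("SORT", lambda xs: sorted(xs))
print(ask([3, 1, 2], hint="SORT"))  # -> [1, 2, 3]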
>
> The problem with making a test suite for an embodied AI, is that the
> test suite is inevitably very body-dependent.
>
Hmm. I'm not sure I agree with this. I agree that an *interface* has
to be tailored to each embodiment; I don't agree that the basic
lessons would change. For example: feeding a string of binary data to
a human is going to result in difficulty and confusion, while
providing the same information as visual characters and words will
lead to better uptake.
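In code terms, the separation I have in mind looks something like this
(all names illustrative):

class TextInterface:
    def present(self, bits):
        # a human-friendly embodiment: render as characters
        return bits.decode("ascii", errors="replace")

class RawInterface:
    def present(self, bits):
        # a bit-stream embodiment: pass the data straight through
        return bits

def run_lesson(lesson_data, interface):
    # the lesson itself never changes; only its presentation does
    return interface.present(lesson_data)

run_lesson(b"ADD 2 3", TextInterface())  # what a human would see
run_lesson(b"ADD 2 3", RawInterface())   # what a bit-stream AGI sees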
>
> Indeed, if the "embodiment is necessary for AI" theory is correct,
> [snip]
>
I don't think embodiment is necessary for AGI. But I do think that
real-world input will make symbol-grounding (of the type humans do)
easier.
>
> I do not think Michael's tests are anywhere near adequate as a
> training regime for a baby AI.
>
It depends on the design... but I get your point. Many types of AGI
design will require a large quantity of experiential learning. I have
not attempted to describe that, because different designs will require
different lessons and learning situations. Instead the Curriculum
focuses on what might be called 'book learning': lessons that every
AGI is going to have to learn in order to make progress. I will make
this clearer in my next version.
>
> However, I think that tests like Michael's can serve as an important
> component of a baby AI training regimen.
>
Thanks for the encouragement :D
Michael Roy Ames