From: Ben Goertzel (ben@goertzel.org)
Date: Thu Oct 27 2005 - 08:24:15 MDT
Richard,
Your comments pertain directly to our current work with Novamente, in which
we are hooking it up to a 3D simulation world (AGISIM) and trying to teach
it simple tasks modeled on human developmental psychology. The "hooking up"
is still in progress (it involves various changes to our existing codebase,
which has been tuned for other things), and the teaching of simple tasks
probably won't start until December or January.
I agree it is possible that, after we teach the system for a while in the
environment, it will reach a point where it can't learn what we want it to.
We don't have a rigorous proof that the system will learn the way we want it
to. But we have put a lot of thought, analysis, and discussion into trying
to ensure that it *will* learn that way. I believe we can foresee the
overall course of the learning and the sorts of high-level structures that
will emerge during learning, even though the details of what will be learned
are, of course, unpredictable in practice.
-- Ben G