From: Ben Goertzel (ben@goertzel.org)
Date: Fri Sep 16 2005 - 04:10:04 MDT
Michael Wilson:
> Unfortunately I don't think it's likely that
> your project will produce an AGI Ben, not so much because of what little
> I know about your design, but because your research methodology does not
> seem to be rigorous enough to consistently cut through the dross and
> misunderstandings in search of the Right Thing. Unfortunately while your
> actual design has changed, this meta-level approach appears to have
> remained constant over the time that you have been publishing your ideas.
Ah... what I see here is an opportunity to turn an annoying conversation
into a potentially more interesting one!
What is my research methodology for my AI project, going forward? I will
now describe it briefly...
For the next phase of the Novamente project it is as follows. We are
connecting Novamente to a 3D simulation world called AGI-SIM, where it
controls a little agent that moves around and sees and does stuff. We have
then identified a series of progressively more complex tasks for
Novamente to carry out in this environment, based loosely on Piaget's theory
of cognitive development.
So, our methodology for the next phase is simply to try to get Novamente to
carry out these tasks, one by one, in roughly the order we've articulated
based on Piaget.
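To make this concrete, here is a minimal sketch of what such a staged
curriculum could look like in code. It is purely illustrative: the task
names, stage labels, and the world/agent interfaces are hypothetical
stand-ins, not AGI-SIM's actual API.

    # Illustrative sketch only; the Task names and the world/agent
    # interfaces are hypothetical, not AGI-SIM's real API.
    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass
    class Task:
        name: str
        stage: str                     # Piagetian stage, e.g. "sensorimotor"
        passed: Callable[[Any], bool]  # success test over sim-world state

    # Tasks ordered roughly by Piagetian developmental stage
    CURRICULUM = [
        Task("track_moving_object", "sensorimotor",
             lambda w: w.target_tracked),
        Task("fetch_named_object", "preoperational",
             lambda w: w.object_delivered),
        Task("conservation_of_quantity", "concrete_operational",
             lambda w: w.answered_correctly),
    ]

    def run_curriculum(world, agent, max_episodes=1000):
        """Attempt each task in order; advance only after the current one
        is passed, mirroring the staged-development methodology."""
        for task in CURRICULUM:
            for episode in range(max_episodes):
                world.reset(task.name)
                agent.run_episode(world)
                if task.passed(world):
                    break
            else:
                return task.name   # stuck here; rethink the design
        return None                # all stages passed

One virtue of ordering the tasks this way is that failure localizes the
problem: if the system stalls at a given stage, that points at which
cognitive capability the design is missing.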
If we succeed, we should have an "artificial child" of sorts that can
communicate in complex, if not fully humanlike, English and that can solve
commonsense reasoning problems, posed in the context of its world, at the
level of an average 10-year-old.
I believe there is a strong argument that the step from this kind of
artificial child to a superhuman AI is not as big as the step from right
here, right now to the artificial child.
The big error to ward off in this kind of approach is overfitting: one
doesn't want a system that fulfills the exact goals one has laid out only
by "cheating", i.e. by being able to do essentially nothing but fulfill
those goals. However, I think the risk of that is low in this case, because
the system we're using, Novamente, was designed with much more general aims
in mind, and the key algorithms in it have already been used in other
applications such as data mining and rule-based language processing.
This next-phase methodology may well prove that what we've done in the last
phase -- building a first working version of the Novamente core, and then
engineering and partially tuning the cognitive modules of the Novamente system --
is completely inadequate. It may prove that PTL (probabilistic term logic,
my own twist on probability theory) sucks and should be replaced by
something more standard; or, it may suggest that explicit probabilistic
reasoning is a bad idea for AI and one should instead try neural net type
approaches that lead to probabilistic reasoning type behavior on an emergent
level. It may show that the Novamente algorithms seem basically sensible
but will need 1000 times more processing power than is feasibly achievable
right now.
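To give a flavor of what PTL does: its basic inference step is deduction
over uncertain inheritance relations, estimating the strength of A -> C
from A -> B and B -> C together with the term probabilities. Below is a
minimal sketch, assuming the standard independence-based deduction rule;
whether this matches the current PTL internals exactly is an assumption,
not a transcription of Novamente's code.

    # Minimal sketch of a PTL-style deduction step, assuming the standard
    # independence-based rule; not Novamente's actual implementation.
    def ptl_deduction(s_ab, s_bc, s_b, s_c):
        """Estimate P(C|A) from P(B|A), P(C|B), P(B), P(C), assuming A and
        C are independent given B (and given not-B)."""
        if s_b >= 1.0:
            return s_bc  # degenerate case: B is certain
        return s_ab * s_bc + (1.0 - s_ab) * (s_c - s_b * s_bc) / (1.0 - s_b)

    # e.g. "cat -> mammal" (0.99), "mammal -> animal" (0.99),
    # with P(mammal) = 0.1, P(animal) = 0.2:
    print(ptl_deduction(0.99, 0.99, 0.1, 0.2))  # ~0.981

The "neural net type" alternative mentioned above would capture the same
regularities implicitly, without ever manipulating formulas like this one.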
Or, it may prove that the Novamente design is well-founded, but just needs a
few tweaks and modifications to pass through the learning stages....
-- Ben