From: Ben Goertzel (email@example.com)
Date: Sat Jul 28 2001 - 12:00:22 MDT
Right now the effort to create a real AI is fragmented among a small
collection of individuals and groups.
Among the individuals on this list, we have Eli's effort, Peter Voss's
effort, the work of myself and the rest of the Webmind Diehards team, and
some others as well (James Rogers?)...
I realize that fragmentation is not necessarily a bad thing: evolution
proceeds through competition of different approaches, and the fittest
approaches survive.
However, at a certain point it will maximize our rate of progress toward the
Singularity if we focus all our efforts on *one* AI system rather than a
scattered collection of competing ones.
(Naturally my intuition right now is that this one should be Webmind, but
that's not my point at the moment -- and if someone else comes up with a
design that looks better (or, better yet, a working real AI), I'll throw
myself behind it fully. I want to get real AI built; being the primary
conceptual architect is not nearly as important to me as seeing the end goal
achieved and being a part of the process ;) )
What would be valuable, I think, is if we could agree on a series of "IQ
tests" for baby AIs, which we could use to objectively assess which ones
were more promising and thus deserved more attention.
I also think it may be valuable to each AI developer to see how others'
systems perform on the same tasks that his system is working on.
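To make the comparison concrete, here is a minimal sketch (in Python; every name in it is hypothetical and belongs to no existing system) of the kind of shared harness that would let different baby AIs be scored on the same battery of tests:

```python
# Hypothetical sketch of a shared "baby AI IQ test" harness.
# The only point is the common interface: any system that can map
# (input -> output) can be scored on the same battery of tests.

class IQTest:
    """One benchmark task: poses problems, scores answers in [0, 1]."""
    def __init__(self, name, problems):
        self.name = name
        self.problems = problems  # list of (input, expected_output) pairs

    def score(self, system):
        """Fraction of problems the system answers correctly."""
        correct = sum(1 for x, y in self.problems if system(x) == y)
        return correct / len(self.problems)

def run_battery(tests, systems):
    """Score every system on every test.

    Returns {system_name: {test_name: score}}, so the results of
    different AI projects are directly comparable."""
    return {sys_name: {t.name: t.score(fn) for t in tests}
            for sys_name, fn in systems.items()}

# Toy usage: two "systems" (plain functions) on a trivial test.
double_test = IQTest("double-the-number", [(1, 2), (2, 4), (5, 10)])
results = run_battery([double_test],
                      {"sys-a": lambda x: x * 2,   # solves the task
                       "sys-b": lambda x: x + 1})  # right on one case only
```

The interesting work, of course, is in choosing the test battery, not in the harness itself.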
This isn't simple to do in general, because the scope of AI is too broad:
people are making mobile robot systems, computer vision systems,
mathematical reasoning systems, blah blah blah. These baby AIs are
basically different species of program, and cross-species IQ testing
doesn't mean much.
However, in the context of the quest toward self-modifying AI, it may be
possible to create such tests more easily.
Looking at the efforts of Eli, Peter and myself, there is a lot of common
ground. None of us is trying to model human perception, action or language
processing accurately or in detail. We're all making software programs
ultimately aimed at general intelligence and goal-oriented behavior.
It may be worthwhile for us to spend some time figuring out a series of
detailed "benchmarks" by which to test the progress of such systems toward
the end goal. Peter and I have been talking about this a bit; he's proposed
to use understanding of 2D shapes as an early IQ-testing context. In
testing Webmind we've decided to use his suggested 2D shape domain together
with a couple more practical domains involving real-world data. I can post
something here later about specific tests involving the 2D shape domain that
we'll run Webmind through over the next few months. (The goal is not to do
any serious vision processing, edge detection etc., just to do basic pattern
recognition and pattern creation tests in an easily-humanly-interpretable
domain.)
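For a flavor of what such a test could look like (a sketch only; this is not the actual Webmind or Voss test suite), consider pattern completion on a 2D binary grid: hide some cells of a regular pattern and score a system on how many it fills in correctly:

```python
# Sketch of a 2D-shape pattern-completion test (hypothetical names,
# not any real suite): the system sees a grid with some cells hidden
# and must predict the hidden values; score = fraction predicted right.

def make_checkerboard(n):
    """An n x n checkerboard of 0s and 1s -- a maximally regular pattern."""
    return [[(r + c) % 2 for c in range(n)] for r in range(n)]

def completion_score(grid, hidden, predict):
    """hidden: list of (row, col) cells the system cannot see.
    predict(visible_grid, r, c) -> 0 or 1 for each hidden cell."""
    n = len(grid)
    visible = [[None if (r, c) in hidden else grid[r][c]
                for c in range(n)] for r in range(n)]
    right = sum(1 for (r, c) in hidden if predict(visible, r, c) == grid[r][c])
    return right / len(hidden)

# A trivial "learner" that assumes alternation with the nearest
# visible cell to the left -- enough to ace a checkerboard.
def copy_left(visible, r, c):
    col = c - 1
    while col >= 0 and visible[r][col] is None:
        col -= 1
    return 1 - visible[r][col] if col >= 0 else 0

grid = make_checkerboard(8)
score = completion_score(grid, [(0, 3), (2, 5), (7, 7)], copy_left)
```

The virtue of this kind of domain is exactly what's noted above: a human can look at the grid and see at a glance whether the system's answer is sensible.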
Eli may have some ideas for simple tests involving program modification?
In the context of "schema learning" in Webmind we created some tests
involving goal-oriented program creation: some maze-running stuff, some
stuff like "learn to solve the Tower of Hanoi problem", etc.
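A test like the Tower of Hanoi one has a natural objective score, since the optimal solution is known: n disks take exactly 2**n - 1 moves, so a learned schema can be graded by how close its solution length comes to that. A sketch (hypothetical names, not the actual Webmind schema-learning code):

```python
# Sketch of scoring a learned Tower of Hanoi schema against the optimum.
# The classic recursion moves n disks from peg a to peg c via peg b;
# its solution length, 2**n - 1 moves, is provably minimal, so
# optimal_length / learned_length gives a score in (0, 1].

def hanoi(n, a="A", b="B", c="C"):
    """Return the optimal move list for n disks from peg a to peg c."""
    if n == 0:
        return []
    return hanoi(n - 1, a, c, b) + [(a, c)] + hanoi(n - 1, b, a, c)

def efficiency(n, learned_moves):
    """How close a learned solution is to optimal (1.0 = optimal)."""
    return (2 ** n - 1) / len(learned_moves)

optimal = hanoi(3)  # the 7-move optimal solution for 3 disks
# A valid but wasteful 11-move solution (shuffles a disk back and
# forth after solving) would score 7/11:
wasteful = optimal + [("C", "B"), ("B", "C")] * 2
```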
Obviously, we all agree this is the sort of thing SIAI should be focusing on
(rather than, say, ummm... creating new programming languages (even really
cool ones), or... augmenting human brains to be smarter, or birds to fly
faster (or birds to be smarter)...), right?
-- Ben G
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT