From: Ben Goertzel (ben@goertzel.org)
Date: Sun Jan 23 2005 - 08:09:22 MST
> > If I were a betting man (and I am on occasion), I'd put my money on Ben.
> > Seems to me he's got Novamente halfway around the track while Eliezer's
> > still trying to decide which horse he's going to ride.
>
> And if Novamente should ever cross the finish line, we all die. That is
> what I believe or I would be working for Ben this instant.
>
> Aside from that, I don't object to your statement of fact. You can indeed
> move faster the less you care about safety. We'll all die when you cross
> the finish line, but hey, you were first! Yay! That is how people think,
> and that is what makes the planet itself unsafe, at this point in time.
>
> --
> Eliezer Yudkowsky
I'll take this opportunity to say a few things about Novamente and its
current situation. [I'll get to Friendliness at the end.]
1)
After many years of experimenting with AI components and doing prototype
system-building and related mathematical and conceptual theory, I'm pretty
confident we have an AGI design that is workable. Workable in the sense that
it can yield human-level-and-beyond intelligence in principle; is tractable
enough to do so on a plausibly sized network of contemporary Linux boxes;
and is simple enough to be tuned, debugged, and tested by a small team of
only modestly extraordinary mortals. Furthermore, we have a software design
for our AGI, have a pretty good percentage of it implemented (though
there's still plenty of work to be done), and have done plenty of tuning of
AI components on various test problems.
2)
While we may well be halfway or a third of the way around the track to true
AGI, alas, our pace of progress at this point is more like a fast walk than
a gallop or a trot. The reason is that the core Novamente team -- the
handful of folks who really understand the Novamente system -- are spending
most of their time working on Novamente-based commercial software consulting
projects, rather than on directly AGI-oriented work. This was OK for a
while, but we have now reached the point where we have initial versions of
the basic learning/reasoning/memory components of Novamente, and a good
initial version of the overall "Mind OS" framework in which they cooperate.
This is the point at which Novamente-AI-component-based commercial
development necessarily DIVERGES from AGI work. When we were building the
basic learning/reasoning tools, the commercial work and AGI work were
somewhat overlapping, because the same tools can be used for AGI and for
narrow AI apps. But what we need to do now for AGI is work on integrating
the different AI tools together in a more sophisticated way in the context
of having Novamente control an embodied agent in a simulated environment.
And this is not work that any of our current commercial applications
supports.
3)
Thankfully, due to a recent $7000 investment, I've been able to hire one
person to focus solely on AGI. What he's doing at the moment is building
the simulation environment in which the embodied agent will live, and
hooking this sim-world up to Novamente. But alas, one person isn't
enough....
4)
To really do the Novamente project right, at a reasonable speed, I'd
need something like $500K/year for something like 3 years. I say 3 years
because that is enough time that, ABSOLUTELY FOR CERTAIN, before that time
is up we'd have results SO IMPRESSIVE that getting much more development
money would be no problem. This money would be used to pay a bunch of the
current Novamente gurus, some of whom are in the US and some in Brazil, to
work full-time on nothing but AGI.
5)
To raise this money there are two avenues open:
5a)
Make enough $$ from Novamente-based businesses to fund it ourselves. This
is not going to happen in 2005, but it could happen in 2006 or 2007, if all
goes well. Our bioinformatics work (www.biomind.com) has yielded some
really nice scientific results in the area of gene expression analysis,
we're working with the CDC and the NIH, and over the next couple of years
(with a lot of effort) it should be possible to turn this into a reasonably
profitable business in the biopharma market.
5b)
Get someone to donate money for AGI research. Here there are two
categories:
5b1) Government research grants. Unfortunately the US government
research-funding establishment is extremely conservative where AGI is
concerned, and nearly all AGI-ish funding seems to go to the likes of Cyc,
SOAR and ACT-R. I have been banging my head against the
government-grant-funding wall for some time, and who knows, I may succeed
eventually; it's partly a matter of statistics. At the moment I have some
collaborators in this regard who have a lot of experience getting gov't
research grants.
5b2) Private donations. This just depends on meeting the right person who
has a substantial amount of money and an interest in using it to move
forward toward AGI. I have some contacts who meet these conditions, but am
waiting for the right moment to approach them.
6)
There are some specific things we can do to get ourselves in a better
position in order to raise private donation or government grant money.
These are:
6a)
Finally publish the long-in-process books on Novamente. This will happen in
2005, for real! Two of the 3 books in the trilogy are quite close to being
ready to go out to the publisher!! ;) [Please note, the reason these books
have been so long in coming is basically that I, the lead author, have been
spending so much of my time on commercial narrow-AI projects -- including
very cool and scientifically valuable stuff like Biomind....]
6b)
Put together a reasonably wizzy demo of Novamente doing something cool. I
really hope this will happen in 2005, but I'm not positive it will, due to
lack of human resources devoted to it. What I want to do here is have
Novamente control an agent in our AGI-SIM sim world, according to
instructions given to it in English. We have a good, interactive
English-language comprehension interface (which we built for a commercial
AI contract, and which relies on a mix of learning and inelegant but
effective rule-based AI), and in a couple of months the sim-world will be in
good shape.
What I want to demonstrate initially is just some simple learning and
reasoning. Teach it what the word "on" means by giving it a bunch of
examples of objects on other objects. Once it knows what "Put the cup on
the table" means and knows what cups and bowls are, show that it
automatically learns what "Put the bowl on the table" means. Then do a whole
bunch of other analogous examples, some a bit more complex. Simple stuff --
but visually demonstrable, within a framework constructed with AGI in mind
and with detailed mathematical, conceptual and software documentation
backing up its AGI ambitions.
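To make the flavor of that cup/bowl generalization concrete, here's a
minimal toy sketch in Python. It is entirely hypothetical -- it is not
Novamente's actual node-and-link machinery, and the names (ToyKB, the
category and schema tables) are mine -- it just illustrates the pattern
described above: a command schema learned from one concrete example
transfers automatically to an analogous object in the same category.

# Toy sketch (hypothetical): a relation learned from "cup on table"
# generalizes to "bowl on table" via shared category links.
from collections import defaultdict

class ToyKB:
    def __init__(self):
        self.category = {}               # object node -> category link
        self.schemas = defaultdict(set)  # relation -> {(subj_cat, obj_cat)}

    def learn_example(self, relation, subject, obj):
        # Abstract a concrete example up to the objects' categories.
        self.schemas[relation].add(
            (self.category[subject], self.category[obj]))

    def understands(self, relation, subject, obj):
        # A new command is understood if its objects' categories match
        # a schema learned from an earlier example.
        key = (self.category.get(subject), self.category.get(obj))
        return key in self.schemas[relation]

kb = ToyKB()
kb.category.update(cup="container", bowl="container", table="surface")

# Teach "on" with one concrete example: "Put the cup on the table."
kb.learn_example("on", "cup", "table")

# The analogous command is understood with no further examples:
print(kb.understands("on", "bowl", "table"))  # True

In the real system the category links and schemas would of course be learned
probabilistically from experience rather than handed in; the point is just
the shape of the inference -- one concrete example plus category knowledge
yields understanding of the analogous command.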
7)
What stands between us and our wizzy, fundraising-friendly Novamente demo
right now is simply time and money. We have the AI software framework, we
have the AI learning and reasoning tools within that framework, we have the
language-processing front end (which doesn't embody truly humanlike language
processing -- we know how to do that but haven't yet had the time -- but is
still very useful for practical communication purposes: after a bit of
interaction it does succeed in correctly translating English sentences into
Novamente's internal knowledge-representing nodes and links).
I guess that about $80K in investment or donation money would get us there
for sure, during 2005. Quite possibly less. (However, this $80K would have
to come from a source other than US government grants, because it would have
to be spent mostly outside the US in order to get the needed bang for the
buck. If the money has to be spent in the US, then the price tag is higher,
more like $160K.)
8)
Now, about Friendliness. I agree with Eliezer that it's a very important
thing to worry about. However, as I've often stated before, I just don't
think we know nearly enough about AGI to meaningfully concoct theories of
AGI Friendliness at this point in time. I enjoy Eli's thoughts on AGI
Friendliness very much -- but as far as I'm concerned, CFAI and Collective
Volition and so forth fall into the domain of *very interesting philosophy,
and very interesting scientific speculations*. There is nothing
*constructive* in there that's either very pragmatic or very convincing.
The main thing that Eliezer has demonstrated convincingly, IMO, is that
Friendly AI is a very hard problem! Of course, this demonstration is a
worthwhile thing. But my feeling is that, in order to get a decent feel for
the Friendliness problem, we're going to need to actually experiment with
some simple AGI systems -- systems with awareness of self and the ability to
communicate with humans and to learn. Based on experimenting with such
systems in a safe and simple context, we will be able to create the elements
of a science of intelligence -- something we can hardly claim to have right
now.
Then we will be able to grapple with the problem of Friendly AI in a
primarily scientific rather than speculative way. Of course, at that point
the conclusion may well be that Friendly AI is impossible -- at which point
I'll shift my efforts from AGI-creation to AGI-prevention ;-) But my
*guess* is that this won't be the conclusion...
-- Ben