Re: AGI project planning

From: Michael Wilson
Date: Tue Dec 06 2005 - 09:35:16 MST

Ben Goertzel wrote:
> I suspect that if the Novamente team and I had taken more of
> an evangelical, absolutely-certain stance, then we might well
> have gotten the funding from these individuals.

Yep, I know exactly what you mean. When raising funding I have
been focusing on the near-term commercial applications of a
partly working system. Historically the most 'successful' AGI
projects to date are the ones that failed to produce AGI, but
managed to repurpose their work into profitable narrow AI
applications before the funding ran out completely. I'm picking
a selection of promising commercial applications as early as I can,
and trying to ensure that if successful they will match a
steady buildup in capability towards AGI, rather than being
last ditch fundraising efforts or a series of unrelated
distractions. This lets me be more honestly confident with
potential investors while still hopefully producing experimental
work relevant to the SIAI's eventual project. Incidentally my
recent work has put me even more in agreement with you on the
importance of implementation experience in solving the really
thorny structural issues underpinning tractable fluid reasoning,
though we are still in disagreement as to theory.

> But we presented ourselves honestly, as a group of
> individuals with different estimates of the time it would
> take to complete our project and of our ultimate odds of
> success.

It's true that AGI is somewhat all-or-nothing, but I don't
think a simple estimate of completion time is much use at all.
Arguably it's worse than useless as people often fixate on it
and then decry you if you miss the deadline. I think to be
useful you have to summarise your project plan into a set of
major components, the key challenges for each, the dependencies
between them, the resources assigned and a description of how
the various capabilities your system should have will become
available as you put the components together. Then you can
label all that with confidence-bounded completion time
estimates. Some people will probably still read it and reduce
it down to 'they say they can do it in X years', but at least
if you miss the deadline you can reference your project plan
and show where you got things right and wrong, and meanwhile
the people with a clue will be impressed that you made a
serious effort to plan your project and justify your
predictions. Personally I don't even have enough information
to do this usefully yet, but I think I'm getting steadily
closer to being able to.
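The planning scheme described above can be sketched in code. This is a minimal illustration, not anything from an actual project plan: the component names and month figures are invented, and the only idea it demonstrates is attaching confidence-bounded (low/high) time estimates to components with dependencies and propagating them along the longest dependency chain, with off-path components assumed to proceed in parallel.

```python
# Hypothetical project plan: component -> (dependencies, (low, high) months).
# Names and numbers are invented purely for illustration.
PLAN = {
    "knowledge_store": ((), (3, 6)),
    "inference_core":  (("knowledge_store",), (6, 18)),
    "learning_loop":   (("inference_core",), (4, 12)),
    "integration":     (("inference_core", "learning_loop"), (2, 6)),
}

def completion_range(component, plan=PLAN):
    """Optimistic/pessimistic completion time for `component`,
    following the longest dependency chain; independent branches
    are assumed to run in parallel."""
    deps, (low, high) = plan[component]
    if not deps:
        return (low, high)
    dep_ranges = [completion_range(d, plan) for d in deps]
    return (low + max(r[0] for r in dep_ranges),
            high + max(r[1] for r in dep_ranges))

print(completion_range("integration"))  # (15, 42)
```

Even this toy version makes the point in the text: the honest answer is a wide range hanging off a dependency structure, not a single number, and anyone reducing it to 'X years' is discarding most of the information.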

None of that applies if you're trying to evolve an AGI with
cheap tricks and brute force, but you know my opinions on
that endeavour.

> All of us on the team think the project has a nontrivial
> probability of achieving human-level AGI

I'm beginning to think that the phrase 'human-level AGI'
has become an in-joke used on people who aren't clued up on
how non-anthropomorphic (non-neuromorphic) AGI is.

> Unfortunately, it seemed that presenting our opinions and
> attitudes honestly in this way turned off the investors,

They usually prefer strong technical leadership that everyone
else agrees with; having multiple people trying to impose
their own directions on a project results in disaster unless
those people are exceptionally competent and good at teamwork,
consensus building and lossless compromise. I hope the latter
is something the SIAI will strive for, as
the organisation continues to gain funding, staff and the
means to begin practical work. I get the impression that's
what you want your project to look like too.

> I am happy you have been able to find funding for your work
> while presenting your case honestly.

Again, this is largely because I am neither claiming to be
building an AGI nor actually attempting to build one (though I
confess that having colleagues very skilled in finding and
obtaining minor government grants helps too). The probability
of success (on my first attempt, starting now) if I tried to
simply build an AGI would be very low even if I had as much
funding and staff as I could use*. I believe this despite
also believing that I'm probably among the ten people in the
whole field with the best chance of success. For investors I thus
focus on making money while explaining that there is a
technological development path that will keep opening up new
application areas if pursued (though this is a path very
different from the ascent through animal, toddler, child and
then adult human capabilities that anthropomorphic projects
often claim to be following). For the SIAI, I am focused on
raising the probability that an eventual direct assault on
the AGI problem will be a success, by gathering as much
high-value data on what works, how well and why as possible.

That said, I rate the probability of being able to do /cool
new stuff/+ fairly soon a lot higher, and I think that after
five years of high-minded talk the SIAI could sure as hell
use some of that.

 * Michael Wilson

* Assuming I didn't throw all caution to the wind, though as
usual building an AGI without a worked-out FAI scheme would
be pretty dangerous even with maximum precautions.

+ As in, cool to any IT-literate person, not just to people
studying the details of AI theory. :)


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:54 MDT