From: justin corwin (firstname.lastname@example.org)
Date: Fri Nov 18 2005 - 02:20:18 MST
On 11/17/05, email@example.com <firstname.lastname@example.org> wrote:
> My gut estimate (I believe SIAI's estimate is higher) is that the absolute
> minimum team size for a realistic AGI effort is 6 full time senior-level
> programmers. I believe that's more than the combined current teams at SIAI,
> Novamente, Numenta, and A2I2.
I agree with this assessment, with the addendum that while you may
need a group of qualified workers, you may not need more than two
It's difficult to know for certain the manpower inside private companies,
but I suspect you're underestimating how many people are involved,
though for some values of 'senior level programmers' you may be right.
Here at A2I2, we have four programmers working, with the team historically
Numenta has two senior level software engineers employed for certain,
with more possible, and two more advertised for on the website.
Novamente LLC's page mentioned 7 founders, and there are people who
have been involved in various other aspects of BioMind, AGIRI, and
SIAI consists of basically Eliezer (who isn't known to have done any
software development) and M Wilson, who has; he is listed as an
associate researcher and seems to be fairly active. I have no
details, but his blog and such don't mention any other engineers on
staff that I recall.
Cyc claims 60 employees; how many of those are actual developers is
unknown. A 2003 publication by Michael Witbrock on Cyc's internals
had 9 co-authors, which gives a plausible-sounding lower bound.
Stephen Omohundro presented at the Foresight Institute last year,
which seems to lend some credibility, though I can't find any
information on his "Self Aware Systems" company. This may mean he's on
his own.
Ai Research, about which I can determine little other than that they
are slick web developers with an unimpressive (if polished) chatbot,
appears to have several senior developers and a forum of Internet
volunteers, as well as some unique employment ideas.
There are of course many other research projects and commercial
ventures which might plausibly fall under AI, but not AGI, so we may
(at some risk) exclude them.
Also, we have the plethora of people working in their spare time, or
on their own. It's hard to gauge just what kind of effect this could
have. On one hand, many theoretical contributions have been made by
independents, in academic settings, or as part of open-source or
coordinated volunteer groups in other fields, but on the other hand,
it seems that any plausible development in AI would need to take place
in the context of an AI software supersystem, unless some simplifying
assumption may be found to modularise functions more than I'm aware.
I think that the amount of research to be done precludes any one
person, but not any one company, from making large contributions.
Unfortunately, I don't think the barriers to entry are high: little
prevents new companies or interests from entering the field and
out-manpowering the small, generally self-organized groups doing work
now, except the fact that no one believes any new results are
imminent, and the lack of a usable public theory.
When the Microsofts and Suns of the world see exciting results, it
won't take much for them to replicate and overtake small firms with
anything but a colossal head start, unless someone manages to develop
much AI theory in relative isolation, and keep from having to release
much or any of the software to public attention, which seems extremely
unlikely.
The 'head start' theory seems to underlie most attempts to capitalize
on AI, aside from dreams of a single person taking AI theory from
conception to Singularity by himself, in a short time frame. The head
start itself could take several forms:

- a framework gap: the AI requires an enormous amount of framework and
management software, none of which need be released by someone
verifying performance (possible);
- a theory gap: proprietary or simply advanced software theory that is
difficult for the untrained, or for reverse-engineers, to discern
(also possible);
- a time gap: AI confers massive advantages within a short real-time
frame, making it physically difficult for others to do anything but
ride behind you on their own, later curves (hard takeoff, an
AI-managed AI company, instantaneous market dominance via universal
licensing; there are various scenarios).
It depends on what you're in this business for. I personally subscribe
to a version of the head start theory because I think that by
developing AI sooner, and doing it myself, I have a chance to deflect
the use of AI into positive avenues. This takes the form both of
myself and others benefiting materially from the AI, and of preventing
other negative outcomes.
Making money is not really important, except that we are for the
moment capitalists, and money is the pseudo-liquid medium of exchange
and power. I need to generate a plan that includes sufficient money to
do what I want, in the same way it needs to conform to physical law.
The selection effects for AGI researchers are harsh, and I don't
expect that hiring will be easy. But people who are smart, interested,
and knowledgeable about these things are more likely to be in the
transhumanist community than elsewhere. Why would such hypothetical
people not seek out such things? Unless they were dissatisfied with
the culture to the point of self-isolation, which seems unlikely, the
only other explanation would be future-shock, which would tend to
limit their effectiveness as researchers, I should think.
I'd be interested in hearing other comments or thoughts on this.
-- Justin Corwin email@example.com http://outlawpoet.blogspot.com http://www.adaptiveai.com
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:53 MDT