RE: AGI Prototyping Project

From: Ben Goertzel (ben@goertzel.org)
Date: Sun Feb 20 2005 - 10:02:25 MST


Hi,

> Unfortunately very, very few people are qualified to work directly
> on AGI; my guess would be fewer than 1 in 10,000,000 (we haven't
> found many of these yet).

I agree that the percentage of people who have the right combination of
skills, attitudes, and interests for AGI work is very small, but that
figure seems absurdly small to me.

My own guess is that out of a given graduating class of undergraduate
computer scientists at a reasonably good university, there will probably be
5-10 people with the combination of technical ability, work ethic, and
cognitive-science intuition needed to be productive working on AGI, given
the current state of AGI-related technologies.

Of course, as supporting technologies develop, the technical ability
required for AGI work will decrease and the percentage of capable people
will increase.

> Again I wish this wasn't the case, as I don't like elitism either,
> but reams of past experience (just look at the SL4 archives) have
> shown that though many people think they have something to contribute
> to AGI, very few people actually do.

Well, most people on the SL4 list lack the technical knowledge to really
contribute to making an AGI.

And most of those who do have the technical knowledge are just responding to
emails on the list in a cursory way -- the contributions they make in this
way are quite different from the ones they would make if they were deeply
involved in an AGI project.

I can think of a few folks on this list who would probably be good
contributors to an AGI project, IF they chose to devote enough of their time
to it...

> > AGI isn't any harder than Speech Recognition application development
> > to me.
>
> Speech recognition is a largely solved problem that was and is amenable
> to standard engineering methods. The key point is that it's well
> defined; we know exactly what we want, we just need to find algorithms
> that can do it.

I think the key point is that it's easier than AGI, not that it's more
precisely defined.

> I haven't published anything yet and I won't be doing so in the
> near future. I'd like to, but Eliezer has convinced me that the
> expected returns (in constructive criticism) aren't worth the
> risk.

Well, deciding when to allocate time to writing things up for publication is
always a difficult call.

In principle I'm in favor of it, because I do think there are many folks out
there in the world with constructive criticisms to make about AGI designs.

However, in practice I have been dragging my heels for many years on making
a decent write-up of the Novamente design -- because making such a write-up
is a LOT of work. I have only a certain percentage of my time to spend on
AGI work, due to the need to generate income via narrow-AI work, family
responsibilities, etc., and it's usually tempting to spend this time
actually working toward AGI rather than writing about it in a way that would
be comprehensible to outsiders...

> AGI is mostly a
> high-level design challenge, not an implementation challenge

This is certainly correct.

>However we cannot
> operate that way; firstly once you acknowledge the sheer difficulty
> of AGI you realise that there just aren't that many qualified
> people available (and the unqualified ones would just waste time
> with plausible-looking but unworkable ideas), and that we cannot
> take the risk of releasing powerful techniques to all comers.

It seems that the latter point must be the main one.

I don't believe that ALL of us on this list would just waste your time with
useless discussions, if you were to post your detailed AGI design ideas
publicly.

> > SIAI itself seems to have an intuitive grasp of 'what
> > comes after,' even if it is not laid out for all to see.
>
> It is laid out for all to see here;
>
> http://www.sl4.org/wiki/CollectiveVolition
>
> Please read this if you haven't already. It's a statement of what
> the SIAI intends to do. If you don't agree that this is better
> than the alternative (which is basically allowing other projects
> to build badly understood AGIs that will destroy the world in a
> randomly chosen fashion), you shouldn't be volunteering to help.

CV (Collective Volition) is an interesting philosophical theory that, so far
as I can tell, has very little practical value.

Maybe it can be turned into something with some practical value; I'm not
sure.

To state with such confidence that any AGI not based on this particular
not-fully-baked philosophical theory "will destroy the world in a randomly
chosen fashion" is just ridiculous.

> > Obviously there's no easy way to answer this, but I ask instead,
> > what -are- the security reasons for a select inner circle on this
> > project?
>
> Because the inner circle are known to be moral,

Hmmm... nothing personal, but that sounds like a dangerous assumption!!

I believe this sort of error has been made before in human history ;-p

-- Ben G


