RE: AGI Prototyping Project

From: Michael Wilson
Date: Sun Feb 20 2005 - 15:07:21 MST

>> Unfortunately very, very few people are qualified to work directly
>> on AGI; my guess would be fewer than 1 in 10,000,000 (we haven't
>> found many of these yet).
> I agree that the percentage of people who have the right combination of
> skills and attitudes and interests for AGI work is very small, but that
> figure seems absurdly small to me.

Sorry; wrong description, my fault. The estimate was actually of how
many people would be qualified to work on Friendly AI, which is to
say the probability of their being suitable for the SIAI's
implementation team. FAI is a higher bar than AGI; it requires the
same AI knowledge and raw intelligence, but it also rules out all
the easy ways out (i.e. probabilistic self-modification, brute force
GAs) and requires the right attitude to existential risk. Actually
developing FAI theory from scratch is even worse; I'm not aware of
anyone other than Eliezer who has made significant progress with it.

> I can think of a few folks on this list who would probably be good
> contributors to an AGI project, IF they chose to devote enough of their
> time to it....

Ditto. I can also think of a few folks already working on AGI who
would probably be good contributors to an FAI project, IF they chose
to change their approach to the problem.

>> Speech recognition is a largely solved problem that was and is
>> amenable to standard engineering methods. The key point is that
>> it's well defined; we know exactly what we want, we just need
>> to find algorithms that can do it.
> I think the key point is that it's easier than AGI, not that it's
> more exactly defined.

Speech recognition is a problem far smaller in scope than AGI, but
I think the type of definition is at least as important. The Apollo
project is a good example of a very difficult technical challenge
that was tractable because it was well defined and relatively easy
to break down into manageable chunks that could be tackled by narrow
specialists. For speech recognition we have representative training
sets detailing the expected input and output; we just have to work
out what the mapping function is. For artificial intelligence we
have no such representative set of problems; we can't characterise
exactly what we want the system to do in a black-box fashion.
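To make the distinction concrete, here is a minimal illustrative sketch (the function, data, and labels are all my own invention, not drawn from speech recognition itself): for a well-defined problem, the representative (input, output) pairs *are* the specification, and the engineering task reduces to fitting a mapping function to them.

```python
# Hypothetical sketch: when a representative training set fully specifies
# the desired behaviour, "solving" the problem means fitting a mapping
# function to it. Here the mapping is a trivial 1-nearest-neighbour rule.

def nearest_neighbour(train, x):
    """Return the label of the training input closest to x (1-NN)."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

# The representative (input, output) pairs act as the problem definition.
train = [(0.1, "low"), (0.2, "low"), (0.8, "high"), (0.9, "high")]

print(nearest_neighbour(train, 0.15))  # -> low
print(nearest_neighbour(train, 0.85))  # -> high
```

For AGI there is no analogue of `train`: no black-box set of (input, output) pairs characterises what we want the system to do, which is the sense in which the problem is not well defined.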

> I don't believe that ALL of us on this list would just waste
> your time with useless discussions, if you were to post your
> detailed AGI design ideas publicly.

No, but it's easy to get sucked into trying to answer everyone.

> CV is an interesting philosophical theory that so far as I can
> tell has very little practical value.

It's true that there aren't any implementation details provided,
but wouldn't you agree that it is a clear statement of intent?

> To state with such confidence that any AGI not based on this
> particular not-fully-baked philosophical theory "will destroy
> the world in a randomly chosen fashion" is just ridiculous.

True, which is why I didn't make that statement. If you recall
Eliezer's breakdown of FAI, there are the structural aspects
required to make an AGI do anything predictable, and then there
is a description of what you want it to do. Technically the
structural aspects (maintaining an abstract invariant that
doesn't automatically wipe out the human race) aren't FAI; you'd
need the same thing to implement any personal agenda. If you
converted your 'joyous growth' philosophy into a provably stable
goal system I'd personally consider it a valid FAI theory, though
I'd still prefer CV because I don't trust any one human to come
up with universal moral principles.

>> Because the inner circle are known to be moral,
> Hmmm... nothing personal, but that sounds like a dangerous assumption!!

Again, yes, this is a dangerous assumption. I don't like it, and I'm
one of the people I'm proposing to trust with the responsibility.
However, I don't see any alternative; you're asking people to believe
that you're moral enough to be trusted with AGI, James Rogers is
asking us to believe that he is, and so on. The only slight advantage
the SIAI has in this area is that we're proposing to have a diverse
team of implementers all of whom have a personal veto on continuing
with the project. It's a slight advantage because groupthink can
easily set in even when you're watching for it, but it's better than
a single person laying down the moral principles for a seed AI.
 * Michael Wilson


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT