From: Gary Miller (firstname.lastname@example.org)
Date: Sun May 11 2003 - 14:16:41 MDT
Are we saying the intelligence required to create an FAI is greater than
the intelligence required for the Manhattan Project? Because we all
know what that ended up with!
It's not that I don't trust the people now doing the work; it's just
that if it's publicized and demonstrated as doable, what's to keep
parties with less honorable motives from coming in and taking over the
project?
The only way I see to protect the FAI from exploitation until it has
achieved near omnipotence is to conceal its progress and mask its
achievements by crediting them to human frontmen.
From: email@example.com [mailto:firstname.lastname@example.org] On Behalf Of
Sent: Sunday, May 11, 2003 1:26 PM
Subject: RE: Why FAI Theory is both Necessary and Hard (was Re: SIAI's
flawed friendliness analysis)
> I find it hard to believe that any human truly capable of learning and
> understanding that art would use it to do something so small and mean.
> You "find it hard to believe"? _I_ find it hard to believe you would
> use such a phrase casually or unintentionally, without awareness of
> the implications. *Especially* when such a statement is made about a
> probability of the form P(small & mean | capable)...
> I agree, Cliff.
> Human history is full of twisted atrocities that are "hard to believe"
> -- except that they actually happened.
Apparently, the kind of wisdom required to create Friendly AI is way
above (and includes) the kind of wisdom required to realize the
smallness and meanness of seeking political power. If someone has
failed to achieve that understanding of human morality, then, alas,
they will hardly be able to raise a Friendly AI.
-- Christian Rovner
Singularity Institute for Artificial Intelligence, http://intelligence.org/
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:42 MDT