From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Nov 24 2007 - 17:36:29 MST
Tim Freeman wrote:
> From: "Wei Dai" <weidai@weidai.com>
>
>>My conjectured-to-be-better scheme is to not build an AGI until we're more
>>sure that we know what we are doing.
>
> Who is this "we"? By chance, I know of four AGI projects that seem to
> be making reasonable progress without any concerns about friendliness.
> I stumbled across four without making any effort to search for them,
> so there are surely more out there.

Building an AI is pass-fail. You don't get graded on a curve. It
doesn't matter if someone else does worse.

Nobody out there is actually admitting to themselves that they're on
course to destroy the world, except maybe Moravec and de Garis, so
what the Bad Guys say - of course - is "Someone has to develop AI
eventually, so if we slow down and think about FAI, we won't end up
making a difference. We believe we're one of the *most* ethical
projects out there."

--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:01 MDT