Re: Security overkill

From: Eliezer S. Yudkowsky
Date: Sun May 18 2003 - 09:07:58 MDT

Philip Sutton wrote:
> Eliezer said:
>>That's the problem with outsiders making up security precautions for
>>the project to take; at least one of them will, accidentally, end up
>>ruling out successful Friendliness development.
> I don't see a problem with outsiders making up precautions or proposed
> solutions to help achieve the creation of safe AGIs. Making up is one
> thing and unilaterally imposing is another.

I stand corrected.

Eliezer S. Yudkowsky
Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:42 MDT