From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Jun 28 2002 - 13:14:00 MDT
James Higgins wrote:
> At 11:04 PM 6/27/2002 -0600, Ben Goertzel wrote:
>
>> 2)
>> It's important to put in protections against unexpected hard takeoff, but
>> the effective design of these protections is hard, and the right way to
>> do it will only be determined through experimentation with actual AGI
>> systems (again, experimental science)
>
> Does this indicate that there will be no fail-safe system in place until
> after your AGI system has been conscious and running for a while?
SIAI is planning to make a small grant to the Novamente project, which will
pay them to do this immediately instead of in the indefinite future.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence