Re: [sl4] Comparative Advantage Doesn't Ensure Survival

From: Charles Hixson (charleshixsn@earthlink.net)
Date: Mon Dec 01 2008 - 14:10:05 MST


Stuart Armstrong wrote:
>> ...
>> Certainly it would be possible to design AIs with such goals. I think it
>> would be rather silly to do so, however.
>>
>
> Killing humans is not the big risk - lethal indifference to humans is
> the big risk.
>
I think you've missed my point.


