From: Nick Tarleton (nickptar@gmail.com)
Date: Mon Dec 01 2008 - 14:48:12 MST
On Mon, Dec 1, 2008 at 4:10 PM, Charles Hixson <charleshixsn@earthlink.net> wrote:
> Stuart Armstrong wrote:
>
>> ...
>>> Certainly it would be possible to design AIs with such goals. I think it
>>> would be rather silly to do so, however.
>> Killing humans is not the big risk - lethal indifference to humans is
>> the big risk.
> I think you've missed my point.
>
Even absent explicit maximization, power combined with indifference is horribly dangerous.
http://www.overcomingbias.com/2008/08/anthropomorph-1.html
npt