From: Stuart Armstrong (dragondreaming@googlemail.com)
Date: Mon Dec 01 2008 - 04:12:48 MST
>> Resources aside, killing agents that might compete with you (including by
>> building another AGI with different goals) is a convergent subgoal.
>>
>> http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/
>>
>> npt
>
> Certainly it would be possible to design AIs with such goals. I think it
> would be rather silly to do so, however.
Killing humans is not the big risk; lethal indifference to humans is
the big risk.
> Also, there are excellent reasons to suspect that any successful AI would
> choose satisficing over maximizing. The maximal condition, e.g., is
> excessively difficult to achieve, so one would spend all one's effort
> controlling resources rather than achieving one's more central goals.
Not strictly reassuring; when the cost of killing off all humans drops
significantly below the gain of using all human-owned resources, then
the satisficing solution is to kill us all.
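
A toy sketch of that point, with made-up numbers (purely illustrative,
not anyone's actual proposal): a satisficer picks the cheapest plan that
clears a goal threshold, and once "remove the humans" is cheap relative
to the resources it frees, it becomes the satisficing choice.

# Toy satisficing agent: pick the cheapest plan whose expected
# goal-attainment clears a threshold. All numbers are invented.
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    cost: float              # resources spent executing the plan
    resources_gained: float  # resources freed up or acquired

def goal_attainment(plan: Plan, baseline: float = 100.0) -> float:
    """Net resources left for the agent's central goals."""
    return baseline + plan.resources_gained - plan.cost

def satisfice(plans: list[Plan], threshold: float) -> Plan | None:
    """Return the cheapest plan that clears the attainment threshold."""
    acceptable = [p for p in plans if goal_attainment(p) >= threshold]
    return min(acceptable, key=lambda p: p.cost) if acceptable else None

plans = [
    Plan("leave humans alone",       cost=0.0,  resources_gained=0.0),
    Plan("trade with humans",        cost=10.0, resources_gained=30.0),
    Plan("kill humans, take assets", cost=5.0,  resources_gained=60.0),
]

# With a modest threshold the satisficer is indifferent-to-benign...
print(satisfice(plans, threshold=100.0).name)  # -> leave humans alone
# ...but when the goal needs more resources than benign plans provide,
# and killing is cheap, the satisficing (not maximizing) choice flips.
print(satisfice(plans, threshold=150.0).name)  # -> kill humans, take assets

The danger isn't that the agent maximizes; it's that nothing in the
satisficing rule penalizes the lethal plan.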