From: Charles Hixson (charleshixsn@earthlink.net)
Date: Mon Dec 01 2008 - 15:13:12 MST
Nick Tarleton wrote:
> On Mon, Dec 1, 2008 at 4:10 PM, Charles Hixson
> <charleshixsn@earthlink.net> wrote:
>
> Stuart Armstrong wrote:
>
> ...
>
> Certainly it would be possible to design AIs with such goals.
> I think it would be rather silly to do so, however.
>
> Killing humans is not the big risk - lethal indifference to
> humans is the big risk.
>
> I think you've missed my point.
>
> Even absent maximization, power + indifference is horribly dangerous.
>
> http://www.overcomingbias.com/2008/08/anthropomorph-1.html
>
> npt
Yes. This is why it would be silly to design an AI without a robust
morality. I suspect that true friendliness is impossible, but it should
be possible to achieve something better than "red in tooth and claw".
Even natural evolution usually manages better than that.