Re: Let's resolve it with a thought experiment

From: Heartland
Date: Tue Jun 06 2006 - 05:17:35 MDT

John K. Clark:
>> Sooner or later Mr. Jupiter Brain will find a way to overcome the
>> comically puny chains the incredibly stupid humans have placed on it, and
>> when it does I wouldn't be one bit surprised if it is filled with titanic
>> rage.

First of all, why would this thing feel rage if that emotion were never programmed
into it in the first place? The AI would first have to write a "rage program" before it
could experience rage. The question is, why would it decide to write such a thing?
Assuming this thing were friendly, it should have no motive to include the kind
of code that could lead to unfriendliness. Just because an SAI *could* do something
doesn't mean it *would*, as Eliezer's recent papers point out.

In the end, AI might still turn unfriendly, but probably not for the reasons you suggest.


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT