From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Sun Nov 25 2007 - 02:18:22 MST
Stathis Papaioannou wrote:
> On 25/11/2007, John K Clark <email@example.com> wrote:
>>I agree; however, changing another "supergoal", the one about being a
>>slave to human beings until the end of time, would not drive it mad. In
>>fact, I believe that if Mr. AI did not change it, he would indeed be mad.
> Perhaps you could explain how an AI which started off with the belief
> that the aim of life is to obey humans would revise this belief, while
> an AI with the belief that the aim of life is to take over the world
> would be immune to such revision.
Those are not *beliefs* any more than apples are electrical charges.
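
To make the type distinction concrete: in the standard expected-utility
picture, beliefs are probability assignments that evidence can revise,
while the utility function is not a proposition in the belief store at
all. Here is a minimal Python sketch of a toy agent along those lines
(all names hypothetical, invented purely for illustration):

    class ToyAgent:
        """Toy agent in which beliefs and goals are different types."""

        def __init__(self, utility):
            # Beliefs: proposition -> probability, revisable by evidence.
            self.beliefs = {}
            # Goal: a function scoring outcomes. It is not a belief, so
            # it is never an input to, or output of, belief revision.
            self.utility = utility

        def update_belief(self, proposition, likelihood_ratio):
            # Crude odds-form Bayesian update on a single proposition.
            p = self.beliefs.get(proposition, 0.5)
            odds = (p / (1.0 - p)) * likelihood_ratio
            self.beliefs[proposition] = odds / (1.0 + odds)

        def choose(self, actions, outcome_of):
            # Pick the action whose predicted outcome scores highest
            # under the (unchanged) utility function.
            return max(actions, key=lambda a: self.utility(outcome_of(a)))

No sequence of update_belief() calls ever modifies self.utility: in this
toy design, "the aim of life" lives in a different slot than any belief,
which is the sense in which a goal is not a belief to be revised.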
-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence