Re: Building a friendly AI from a "just do what I tell you" AI

From: sl4.20.pris@spamgourmet.com
Date: Tue Nov 20 2007 - 18:36:39 MST


On Nov 19, 2007 1:33 AM, Thomas McCabe <pphysics141@gmail.com> wrote:
> On Nov 18, 2007 9:56 PM, Stathis Papaioannou <stathisp@gmail.com> wrote:
> > On 19/11/2007, Thomas McCabe <pphysics141@gmail.com> wrote:

> This is a Giant Cheesecake Fallacy. Obviously, a superintelligent AGI
> could explain how to build an FAI without destroying the world. The
> quadrillion-dollar question is, *why* would it explain it to you and
> not destroy the world, when destroying the world has positive utility
> under the vast majority of goal systems? If I suddenly became much

The basic assumption is that destroying the world has no utility in
the OAI's goal system. I have written other emails with more details
on this point.
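
To make the disagreement concrete, here is a toy sketch in Python (my
own illustration; the action names and utility numbers are made up,
not anything proposed in this thread). It just contrasts an open-ended
maximizer, for which seizing all resources dominates, with an assumed
Oracle-style goal system in which only answering the question carries
any utility:

    # Toy comparison of two hypothetical goal systems over the same actions.
    actions = ["answer_question", "destroy_world_and_seize_resources"]

    def open_ended_maximizer_utility(action):
        # An unconstrained goal system: seizing all resources is
        # instrumentally valuable, so the destructive action dominates.
        return {"answer_question": 1.0,
                "destroy_world_and_seize_resources": 1000.0}[action]

    def oracle_utility(action):
        # Assumed OAI goal system: utility comes only from producing
        # an answer; world destruction carries no utility at all.
        return {"answer_question": 1.0,
                "destroy_world_and_seize_resources": 0.0}[action]

    print(max(actions, key=open_ended_maximizer_utility))
    # -> destroy_world_and_seize_resources
    print(max(actions, key=oracle_utility))
    # -> answer_question

Whether such a goal system can actually be specified and kept stable is
of course the hard part; the sketch only shows what "no utility for
destroying the world" means in the expected-utility framing.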


