Re: Immorally optimized? - alternate observation points

From: H C (lphege@hotmail.com)
Date: Fri Sep 09 2005 - 20:07:05 MDT


"You've specified an AGI which feels desire, and stated it doesn't mimic
human desires"

It wants to be Friendly, but it doesn't want to have sex with people or eat
food.

>From: Phillip Huggan <cdnprodigy@yahoo.com>
>Reply-To: sl4@sl4.org
>To: sl4@sl4.org
>Subject: Re: Immorally optimized? - alternate observation points
>Date: Fri, 9 Sep 2005 11:15:04 -0700 (PDT)
>
>H C <lphege@hotmail.com> wrote:
> >Imagine (attempted) Friendly AGI named X, who resides in some computer
> >simulation. X observes things, gives meaning, feels desire, hypothesizes,
> >and is capable of creating tests for vis hypotheses. In other words, AGI X
> >is actually a *real* intelligent AGI, intelligent in the human sense (but
> >without anthropomorphizing human thought procedures and desires).
>
> >Now imagine that AGI X has the capability to run "alternate observation
> >points" in which ve creates another "instance" of the [observation program -
> >aka intelligence program] and runs this intelligence program on one
> >particular problem... and this instance exists independently of X, except it
> >modifies the same memory base. In other words "I need a program to fly a
> >helicopter" *clicks in disk record where an alternate observation point
> >already learned/experienced flying a helicopter* "Ok thanks."
>
> >Now if you optimize this concept, given some problem like "Program this
> >application", X could create several different AOPs and solve 5 different
> >parts of the problem at the same time, shut them down, and start solving the
> >main problem of the application with all of the detailed trial and error
> >learning that took place in creating the various parts of the application
> >already done.
>
> >The problem is, is it *immoral* to create these "parallel intelligences" and
> >arbitrarily destroy them when they've fulfilled their purpose? Also, if you
> >decide to respond, try to give an explanation for your answers.
>
>
>You've specified an AGI which feels desire, and stated it doesn't mimic
>human desires. Which is it? If the AGI itself cannot answer this moral
>dilemma, it is not Friendly and we are all in big trouble. I suspect the
>answer depends upon how important the application is that you are telling
>the AGI to solve. If solving the application requires creating and
>destroying 5 sentient AIs, we are setting a precedent for computronium.
>

Good point. You could settle for suboptimal performance while waiting for the
AGI to Singularitize itself and tell you the answer.
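
For clarity, here is a rough Python sketch of the AOP scheme described in the
quoted message above: several instances work on subproblems against a shared
memory base and are then shut down, after which the main instance proceeds
with everything they learned. The names (MemoryBase, spawn_aop, the list of
subproblems) are made up purely for illustration; real AOPs would be full
intelligence programs, not throwaway functions.

# Illustrative sketch only; nothing here is a real intelligence program.
from concurrent.futures import ThreadPoolExecutor
from threading import Lock


class MemoryBase:
    """Shared memory base that every AOP instance reads from and writes to."""

    def __init__(self):
        self._lock = Lock()
        self._lessons = {}

    def record(self, topic, lesson):
        with self._lock:
            self._lessons[topic] = lesson

    def recall(self, topic):
        with self._lock:
            return self._lessons.get(topic)


def spawn_aop(memory, subproblem):
    """One alternate observation point: work on a single subproblem, write
    what was learned into the shared memory base, then terminate."""
    lesson = f"trial-and-error results for {subproblem}"  # stand-in for real learning
    memory.record(subproblem, lesson)


def main_instance():
    memory = MemoryBase()
    subproblems = ["parser", "planner", "UI", "network layer", "tests"]

    # Run several AOPs in parallel; each is destroyed once its work is done.
    with ThreadPoolExecutor(max_workers=len(subproblems)) as pool:
        pool.map(lambda s: spawn_aop(memory, s), subproblems)

    # The main instance now tackles the whole application with all of the
    # detailed learning from the AOPs already in its memory base.
    for s in subproblems:
        print(s, "->", memory.recall(s))


if __name__ == "__main__":
    main_instance()

The point of the sketch is only the structure: the workers share one memory
base rather than each keeping their own, which is exactly what makes shutting
them down afterward an open moral question.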



