From: H C (firstname.lastname@example.org)
Date: Fri Sep 09 2005 - 20:07:05 MDT
"You've specified an AGI which feels desire, and stated it doesn't mimic
It wants to be Friendly, but it doesn't want to have sex with people or eat
>From: Phillip Huggan <email@example.com>
>Subject: Re: Immorally optimized? - alternate observation points
>Date: Fri, 9 Sep 2005 11:15:04 -0700 (PDT)
>H C <firstname.lastname@example.org> wrote:
> >Imagine (attempted) Friendly AGI named X, who resides in some computer
> >simulation. X observes things, gives meaning, feels desire, hypothesizes,
> >and is capable of creating tests for vis hypotheses. In other words, AGI X
> >is actually a *real* intelligent AGI, intelligent in the human sense (but
> >without anthropomorphizing human thought procedures and desires).
> >Now imagine that AGI X has the capability to run "alternate observation
> >points" in which ve creates another "instance" of the [observation
> >aka intelligence program] and runs this intelligence program on one
> >particular problem... and this instance exists independently of the X,
> >except it modifies the same memory base. In other words "I need a program to
> >fly a helicopter" *clicks on the disk record where an alternate observation
> >point already learned/experienced flying a helicopter* "Ok thanks."
> >Now if you optimize this concept, given some problem like "Program this
> >application", X could create several different AOPs and solve 5 different
> >parts of the problem at the same time, shut them down, and start solving the
> >main problem of the application with all of the detailed trial and error
> >learning that took place in creating the various parts of the application.
> >The problem is, is it *immoral* to create these "parallel intelligences" and
> >arbitrarily destroy them when they've fulfilled their purpose? Also, if you
> >decide to respond, try to give an explanation for your answers.
>You've specified an AGI which feels desire, and stated it doesn't mimic
>human desires. Which is it? If the AGI itself cannot answer this moral
>dilemma, it is not friendly and we are all in big trouble. I suspect the
>answer depends upon how important the application is you are telling the
>AGI to solve. If solving the application requires creating and destroying
>5 sentient AIs, we are setting a precedent for computronium.
Good point. You could settle for suboptimal performance while waiting for the
AGI to Singularitize itself and tell you the answer.
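
For concreteness, here is a rough sketch of the AOP mechanism described above
(my own illustration in Python, not anything from either post; the sub-problem
names and the learn() function are made up): each alternate observation point
works on one sub-problem, records what it learned in the shared memory base,
and is then shut down before the main instance carries on with the combined
learning.

from concurrent.futures import ThreadPoolExecutor

shared_memory = {}   # the common memory base every observation point modifies

def learn(subproblem):
    # Stand-in for one alternate observation point: an instance of the
    # intelligence program run on one particular problem. It records its
    # trial-and-error learning in the shared memory base, then ceases to exist.
    shared_memory[subproblem] = "what was learned about " + subproblem

subproblems = ["parser", "storage", "interface", "networking", "tests"]

with ThreadPoolExecutor(max_workers=len(subproblems)) as pool:
    list(pool.map(learn, subproblems))   # run the five AOPs in parallel

# The main instance now proceeds with everything the AOPs learned.
print(shared_memory)

The moral question in the quoted post is exactly whether shutting down those
workers is acceptable once they are sentient rather than simple threads.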