Immorally optimized? - alternate observation points

From: H C (lphege@hotmail.com)
Date: Fri Sep 09 2005 - 10:39:06 MDT


Imagine an (attempted) Friendly AGI named X, who resides in some computer
simulation. X observes things, assigns meaning, feels desire, hypothesizes,
and is capable of creating tests for vis hypotheses. In other words, AGI X
is a *real* intelligent AGI, intelligent in the human sense (but without
anthropomorphizing human thought procedures and desires).

Now imagine that AGI X has the capability to run "alternate observation
points" (AOPs), in which ve creates another "instance" of the [observation
program - aka intelligence program] and runs this intelligence program on one
particular problem... and this instance exists independently of X, except
that it modifies the same memory base. In other words: "I need a program to
fly a helicopter" *loads the disk record where an alternate observation
point already learned/experienced flying a helicopter* "Ok, thanks."

Now optimize this concept: given some problem like "Program this
application", X could create several different AOPs to solve five different
parts of the problem at the same time, shut them down, and then start
solving the main problem of the application with all of the detailed
trial-and-error learning that took place in creating the various parts of
the application already done (a toy sketch of the pattern follows below).
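To make the mechanism concrete, here is a minimal sketch of that spawn /
shared-memory / terminate pattern, assuming each AOP is just a worker that
"learns" one subproblem and writes the result into a common memory base
before being discarded. All names here (SharedMemoryBase,
alternate_observation_point, etc.) are hypothetical stand-ins for
illustration, not a real AGI design:

    # Toy illustration of the AOP pattern, not an actual intelligence program.
    from concurrent.futures import ThreadPoolExecutor
    from threading import Lock


    class SharedMemoryBase:
        """Knowledge store that the main agent and all AOPs modify in common."""

        def __init__(self):
            self._lock = Lock()
            self._knowledge = {}

        def record(self, topic, lesson):
            with self._lock:
                self._knowledge[topic] = lesson

        def recall(self, topic):
            with self._lock:
                return self._knowledge.get(topic)


    def alternate_observation_point(memory, subproblem):
        """One AOP: 'experience' a subproblem, then store what was learned."""
        lesson = f"trial-and-error results for {subproblem}"  # stand-in for learning
        memory.record(subproblem, lesson)


    def main_agent(subproblems):
        memory = SharedMemoryBase()
        # Spawn one AOP per subproblem; when the pool exits, the AOPs are gone,
        # which is exactly the step whose morality is in question below.
        with ThreadPoolExecutor() as pool:
            pool.map(lambda p: alternate_observation_point(memory, p), subproblems)
        # The main instance now tackles the overall problem with the AOPs'
        # learning already sitting in its memory base.
        return [memory.recall(p) for p in subproblems]


    if __name__ == "__main__":
        parts = ["UI layer", "database schema", "network code", "parser", "tests"]
        for lesson in main_agent(parts):
            print(lesson)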

The problem is: is it *immoral* to create these "parallel intelligences" and
arbitrarily destroy them once they've fulfilled their purpose? Also, if you
decide to respond, please try to give an explanation for your answers.


