RPOP "slaves"

From: Michael Vassar (michaelvassar@hotmail.com)
Date: Fri Aug 26 2005 - 12:50:56 MDT


Robin and Phil: I know it feels liberal, reasonable, fair, logical,
unselfish, unbigoted, and in every way moral to extend ethical consideration
to a GAI. I also know that as a species, our greatest ethical regrets are
the countless times when we withheld ethical consideration from our fellow
human beings, and that we have a long way to go before we overcome the
tendencies which make us vulnerable to such regrettable actions. However,
concerns about mistreating an AI, enslaving it or whatever, reflect deep
anthropomorphic confusion.

We are not talking about containing an organism with an evolutionary past,
selected from the search space by the removal of trillions of non-ancestors
who failed to crave freedom. We are not even talking about an organism
composed of countless agents, where belief is the interaction of excitatory
"reward" and inhibitory "punishment" on many levels of organization. We are
talking about an organism without cognitive structures onto which to attach
concepts of "reward", "punishment", "disappointment", "pain", "suffering",
"frustration", "freedom", "injustice", or any of the other evolved salient
patterns which we call values. These terms are no more properly attached to
the sort of transparent AI SIAI favors than they are to "evolution", "the
economy", or "the government". We are talking about a Really Powerful
Optimization Process, and it seems possible to me that this is a case where
using that language, RPOP, rather than AI, will greatly improve thinking.

The universe is FULL of things which may merit ethical consideration and do
not yet receive it, from children to animals to lower-level mind-like
processes taking place in our own brains, possibly including structures very
loosely analogous to Freudian concepts, or to our models of other human
beings and of ourselves. It is conceivable that when we better understand
ourselves we will identify other such things that warrant such
consideration, things which I do not yet even suspect; but to guess that an
RPOP is one of them makes no more sense than to guess this of existing
software, and is in fact somewhat less justified than moral consideration
given to the discarded programs produced by directed evolution, especially
directed evolution of neural nets.

I am not at all suggesting that all AI development strategies can be pursued
without the risk of causing harm to digital beings. The construction of an
AI by reverse engineering of the human brain, as Kurzweil advocates, would
be almost certain to be preceded by numerous aborted attempts prior to
success. Partial minds would be built and studied, and their
evolved structures would interact with their simulated environments in ways
which corresponded to thousands of different exotic varieties of suffering.
AIs of this sort would be, in many ways, far less dangerous than the
transparent AIs recommended by SIAI. When thinking about them,
anthropomorphic thinking would work. They would not suddenly display
dazzling and unexpected new abilities which could be fully utilized with
mere gigaflops of processing power. They would not be natives to the world
of code, nor naturally enabled to modify their own workings. Unfortunately,
they would not, ultimately, solve our problem. The fact that they can be
built would not make normative reasoning systems impossible. The
singularity would still beckon, and AIs modeled on our minds would be no
more likely to make the ascent in a controlled and Friendly fashion than we
would. Less likely, actually, for many reasons, including reasons analogous
to those discussed at http://www.nickbostrom.com/fut/evolution.html. There
is also the substantial risk that any such AIs would be terribly insane for
biological and environmental reasons.


