RE: On Our Duty to Not Be Responsible for Artificial Minds

From: Ben Goertzel (ben@goertzel.org)
Date: Sun Aug 14 2005 - 12:11:04 MDT


> Once you enter into the regime of technologies powerful enough to pose
> existential risks, you cannot make even one mistake, or the dice will
> never roll again.
>
> If you will not hold yourself to a higher standard than an ordinary
> scientist, then content yourself with an ordinary scientist's power.

IMO, the appropriate standard for researchers in AGI, strong nanotech, or
other highly powerful and dangerous areas is:

1) higher than the standard for scientists working in less obviously
dangerous areas

2) lower than the unreasonable standard that you propose ("total moral
responsibility for all indirect consequences of one's work")

> An optimization process steers the future into particular regions of
> the possible. I am visiting a distant city, and a local friend
> volunteers to drive me to the conference hotel. I do not know the
> neighborhood. When my friend comes to a street intersection, I am at a
> loss to predict my friend's turns, either individually or in sequence.
> Yet I can predict the result of my friend's unpredictable actions: we
> will arrive at the conference hotel. Even if my friend's house were
> located elsewhere in the city, so that my friend made a wholly
> different sequence of turns, I would just as confidently predict our
> destination. Is this not a strange situation to be in, scientifically
> speaking? I can predict the outcome of a process, without being able
> to predict any of the intermediate steps in the process.

Yes, but you can't predict this outcome with certainty. Your friend could
be hit by a bus along the way, or he could go insane, or he could be
suicidal without your knowing it and poison himself en route. Or he could
decide the conference is boring and spend the day at a bookstore instead.
In those cases, you would not fairly be considered morally responsible for
his failure to arrive at the conference.

> If my friend were not intelligent, then indeed I would need a very
> large amount of computing power to guess the final destination of a
> non-intelligent process as complex as my friend's brain.
>
> Thus creating AI differs in kind from creating a hammer or a personal
> computer or a gun, and the moral responsibilities are different also.

I agree that the moral responsibilities are different, but I don't agree
that they are as extreme as your prior email implied.

-- Ben


