Re: On Our Duty to Not Be Responsible for Artificial Minds

From: Michael Roy Ames (michaelroyames@yahoo.com)
Date: Sun Aug 14 2005 - 16:59:51 MDT


Ben Goertzel wrote:
>
> IMO, the appropriate standard for researchers in AGI, strong nanotech,
> or other highly powerful and dangerous areas is:
>
> 1) higher than the standard for scientists working in less obviously
> dangerous areas
>
> 2) lower than the unreasonable standard that you propose ("total
> moral responsibility for all indirect consequences of one's work")
>
> [snip]
>
> I agree that the moral responsibilities are different, but I don't
> agree that they are as extreme as your prior email implied.
>

Ben, I too think that the responsibility level was overstated, but my
reasoning may differ from yours. The human concept of responsibility is
part of our social system, as pointed out tangentially by Peter de Blanc in
a recent post (Subject: "Responsibility"). Because of the existential
risks involved in creating AI, the metaphorical concept of *levels of
responsibility* breaks down. To give an example of how it breaks down: if
humanity were to be wiped out, there would be no one around to hold or be
held responsible. Using the concept of 'responsibility' to talk about
persons incurring existential risks is misleading, rather like using the
concept of 'assertiveness' to talk about homicidal maniacs. There is no
level of 'responsibility', no matter how high, that is applicable to the
possibility of ending humanity. Other words and concepts are needed.

Eliezer wrote:
> I assign full responsibility to the AI researcher for all
> consequences, intended or unintended. An AI researcher has a
> responsibility to choose an AI design with predictable
> consequences.

This only makes sense if it is possible to choose an AI design with
predictable consequences. That is not possible. If the phrase were changed
to "possible to choose an AI design with predictable behaviors", then the
sentence would make sense. It is possible to build something with predictable
behaviors, and I would largely agree with assigning responsibility to the
researcher in that case. There is some smaller amount of responsibility
that should be assigned to other people who know about the researcher's
work, and an even smaller amount to society in general IMO.

Consequences are things that happen whether our models predict them or not.
An AI may have more sophisticated models with greater powers of prediction
than a human being, but not infinite powers. An AI could steer reality onto
the paths we prefer, but we will never be 100% certain of the distant
consequences of our, or its, actions. Consequences can be near or far from
the actions that are supposed to have caused them. The distance between them
is usually measured in terms of other actions and events coming between the
initial action and its supposed consequence. Greater distance often (but
not always) results in reduced attribution of causality to an action.
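
To make that decay-with-distance intuition concrete, here is a toy sketch
(my own illustration; the decay rate is an arbitrary assumption, and this
is not offered as a real model of causal attribution):

    # Toy model: the causal credit assigned to an action decays with the
    # number of intervening actions and events before the consequence.
    # The decay rate of 0.5 is an arbitrary assumption for illustration.
    def attribution(intervening_events, decay=0.5):
        """Fraction of causal credit assigned to the original action."""
        return decay ** intervening_events

    print(attribution(0))  # 1.0     - consequence follows action directly
    print(attribution(5))  # 0.03125 - five intervening events dilute credit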

A lot of things can happen in a universe as big as ours, things that were
not and could not have been anticipated by an AI's designer, or the AI
itself. Should we then quail before the infinite possibilities and succumb
to the first UFAI that gets cobbled together? SIAI's very existence answers
that question.

Michael Roy Ames
Singularity Institute For Artificial Intelligence Canada Association
http://www.intelligence.org/canada


