Re: On Our Duty to Not Be Responsible for Artificial Minds

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Aug 09 2005 - 20:02:43 MDT


Mark Walker wrote:
>
> ----- Original Message ----- From: "Eliezer S. Yudkowsky":
>
>> Considering the relation between my parents and myself, "autonomy"
>> consists of my parents being able to control a small set of variables
>> in my upbringing and unable to control a much larger set of variables
>> in my cognitive design. Not because my parents *chose* to control
>> those variables and no other, but because my parents were physically
>> and cognitively *unable* to select my genome on the basis of its
>> consequences. Furthermore, my cognitive design - fixed beyond parental
>> control - determined how I reacted to parental upbringing. My fixed
>> cognitive design placed some variables within my parents' deliberate
>> control, in the sense that they could, by speaking English, ensure I
>> would grow up speaking English. However, some variables that my
>> parents greatly desired to control, such as my religion, were beyond
>> the reach of their best efforts at upbringing. It is not that they
>> chose not to control this variable but that they were incapable of
>> controlling it.
>>
> I can't speak for your parents' abilities, but if I were trying to bring
> you up religious, and the fate of the universe rested on your believing
> in God, I think I would have made sure that you did not learn to read or
> write, for a start.

That's pretty difficult if the target is supposed to grow up into a
conventional Orthodox Jew.

>> In the case of an AI researcher we have many, many possibilities.
>> Here are some possibilities that occur to me:
>>
>> 1) The AI researcher is fully capable of choosing between AI designs
>> on the basis of their consequences, and chooses an AI design which
>> invokes no significant moral processing within the AI. In this case I
>> would assign moral responsibility to the AI researcher alone, for all
>> consequences good or ill; the AI itself is not a moral agent.
>>
> As I said, I am a consequentialist whore, so I think there might be cases
> where this is permissible. However, I think it is prima facie
> impermissible to make persons who are not moral agents. (We are
> imagining the AI is a person, right?)

In case (1) I'm presuming a pure Bayesian decision system or similar
optimization process, without the quirks of reflectivity that lead into the
human delusion of consciousness. Exhibition of possibility: natural selection
is an optimization process with cumulative pressure powerful enough to poof
primordial soup into zebras, a feat we would call intelligent if a human
performed it. But natural selection is neither a person nor sentient, as I
currently define those terms.
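
To make the distinction concrete, here is a minimal sketch of what such a
pure decision system amounts to. This illustration is mine, not part of the
original exchange, and every name in it (world_model, utility, actions) is a
hypothetical placeholder. The system scores candidate actions by the
expected utility of their predicted consequences and takes the best one;
there is no moral processing anywhere in the loop.

    # Illustrative sketch only; all names are hypothetical placeholders.
    def choose_action(actions, world_model, utility):
        """Pick the action whose predicted outcomes maximize expected
        utility. world_model(action) yields (probability, outcome) pairs;
        utility(outcome) scores an outcome. Nothing in this loop is a
        moral agent: it is an optimizer, which is why case (1) assigns
        all moral responsibility to the researcher who built it.
        """
        def expected_utility(action):
            return sum(p * utility(outcome)
                       for p, outcome in world_model(action))
        return max(actions, key=expected_utility)

Holding such a system "to account" would not engage anything inside it; a
moral argument is, to it, just another feature of the environment to be
predicted.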

>> I assign full responsibility to the AI researcher for all
>> consequences, intended or unintended. An AI researcher has a
>> responsibility to choose an AI design with predictable consequences.
>> If the AI researcher negligently uses an AI design the AI researcher
>> can't predict, the AI researcher is still fully responsible for all
>> actual consequences.
>
> Yup.
>
>> 2) A competent AI researcher, acting on a perceived moral
>> responsibility to create moral children, deliberately constructs an AI
>> which is tightly or loosely analogous to a human - such that the
>> initial design responds to environmental stimuli and parental
>> upbringing much as does a human, learns and argues morality much as
>> does a human, is partially unaware of its own emotions like a human, etc.
>>
>> We presume that this is a deliberate attempt to create a child of
>> humankind, a worthy participant in the story of Earth-originating life.
>>
>> In this case I would assign a mixture of moral responsibility to the
>> AI researcher and to the AI. The AI, having been deliberately
>> constructed as a moral agent, bears responsibility for its actions.
>> If we hold the AI to account, it will understand what it means to be
>> held to account, and treat this as a moral argument in the same way we
>> do. (In contrast, a paperclip maximizer would care about human moral
>> arguments only as a sort of cognitive activity in humans that might be
>> exploited to create paperclips.) The AI researcher is responsible for
>> all predictable consequences of "constructing a humanlike moral
>> agent", including liability for child abuse if later authorities
>> determine the initial design to have been botched. But I would not
>> say that the AI researcher is responsible for all actions of the
>> created AI, presuming that the created AI was at least as initially
>> benevolent as an average human. Deliberately creating an AI that is
>> worse than average, for example, an AI that starts out with the same
>> emotional makeup as an autistic or a serial killer, makes the AI
>> researcher liable for both child abuse and for the consequences of the
>> AI's actions.
>
> Much like human parents must take some responsibility for their
> children's actions. Is there a point where the AI researcher is off the
> moral hook, like we think human parents are after their children reach a
> certain age,

Provided the parents didn't abuse the child so greatly as to prevent his/her
"normal" human growth.

I presently see two incompatible views of this point, with only a slight overlap:

1) If you create an AI that is as good as an average human and provide a
decent upbringing, you're off the hook after it grows up. If you tilt the
cognitive scales so hugely in favor of kindness and love that the outcome is
deterministic, then you have deprived the offspring of moral autonomy (a sin)
and you are never off the hook.

2) Creating an average human, if you have the opportunity to do better,
constitutes child abuse (a sin). You are obligated to do better than average,
though how much better is not specified.

The slight overlap between these views is creating an AI who starts off as
good as a really good human, but not with emotions skewed so greatly toward
niceness as to be out of the human regime.

> or is there something fundamentally different about
> creating an AI?

*Yes*, there is something fundamentally different about creating an AI! There
is something *hugely* different about creating an AI! The decisions and moral
responsibilities are those of creating a new sentient species, not those of
raising a child.

One who seeks to create a child of humankind is a higher-order parent, faced
with a vastly greater space of options than a human mother caring for the
product of her inscrutable womb. A higher-order parent must possess far more
knowledge and far deeper understanding than a conventional human parent just
to be in the game. Consequently I hold a higher-order parent to far higher
standards; higher-order parents have far greater power and, I judge, far
stricter responsibility. That is why, contrary to my earlier aspirations, I
no longer seek to create a child of humankind - not this century, not if I can
avoid it.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

