Re: FAI means no programmer-sensitive AI morality

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Jun 28 2002 - 20:03:27 MDT


Ben Goertzel wrote:
>>
>> The *entire point* of Friendly AI is to eliminate dependency on the
>> programmers' morals. You can argue that this is impossible or that the
>> architecture laid out in CFAI will not achieve this purpose, but
>> please do not represent me as in any way wishing to construct an AI
>> that uses Eliezer's morals. I consider this absolute anathema. The
>> creators of a seed AI should not occupy any privileged position with
>> respect to its morality. Implementing the Singularity is a duty which
>> confers no moral privileges upon those who undertake it. The
>> programmers should find themselves in exactly the same position as the
>> rest of humanity.
>
> This strikes me as phenomenally unrealistic, Eliezer.

But at least we're finally discussing Friendly AI.

> The vast majority of the Earth's population consists of deeply religious
> people. A decent percentage of these people think that a truly
> autonomous AGI is an abomination against their god!

I rather doubt that a decent percentage have any opinion at all on the
religious state of a seed AI, at least not right this minute, although of
course this may change.

In any case, there is a fundamental difference between agreeing on the human
activities that constitute the entrance to the Singularity, and protecting
the integrity, the "impartiality" if you will, of the Singularity itself.
Debating how to manage existential risks is a human problem, and, yes, it
may be impossible to please everyone.

But it should be equally *true* for every individual, whether or not the
individual realizes it in advance, that they have nothing to fear from the
AI being influenced by the programmers. An AI programmer should be able to
say to anyone, whether atheist, Protestant, Catholic, Buddhist, Muslim, Jew,
et cetera: "If you are right and I am wrong then the AI will agree with
you, not me." A Catholic programmer should be able to build an atheistic AI
because it seems nearly certain that some of the beliefs we have now are *at
least* that wrong.

> I have respect for the beauty of the world's religious belief systems,
> and for the art, poetry, music, and exalted human personalities they have
> helped produce. But I still think these belief systems are profoundly
> "wrong" in many ways.
>
> And I don't think it's reasonable to expect that transhumanists and
> traditionalist Muslims are going to be in exactly the same position with
> regard to a Singularity-approaching AGI. This just isn't plausible.

Ben, I think that if history were to proceed at the current rate (no
transhumans, no Singularity) for another hundred years, and you took the
readership of SL4 and dropped us into 2102, then all of us, including me,
would be shocked, outraged, and *frightened* by the options open to a solid
citizen of the 22nd century.  Just because transhumanists talk about the
Singularity doesn't make us transhumans.  You can't get a real picture of
the Singularity by talking about the "Singularity" with other humans; the
only way you could get a picture of the actual Singularity would be by
talking with a transhuman. Every one of our speculations about the
Singularity is as much a part of the tiny human zone as everything else we
do. The real, actual Singularity will shock us to our very core, just like
everyone else. No, I don't think that transhumanists and traditionalist
Muslims are in all that different a position with respect to the real,
actual Singularity - whatever our different opinions about the human concept
called the "Singularity".

Incidentally, don't be too fast to write off religious groups. I agree that
many religious individuals are likely to disagree about the pre-Singularity
matter of Singularitarianism, but I have also seen religious people who have
no problems with the Singularity. I won't swear that they understood the
whole thing, but what the heck, neither do we.

> A lot of traditionalist Muslims and other religious folks are going to
> think the AGI and Singularity are bad ideas because they are not in the
> future history their religion has foretold.

Again: We need to distinguish the human problem of deciding how to approach
the Singularity in our pre-Singularity world, from the problem of protecting
the integrity of the Singularity and the impartiality of post-Singularity minds.

> It *may* be possible to define a kind of generic "transhumanist ethics",
> and ensure that one's AGI is imbued with generic transhumanist ethics
> rather than with the quirks of a programmer's individual ethics.

But a transhumanist ethics might prove equally shortsighted by the standards
of the 22nd century CRNS (current rate no Singularity). Again, you should
not be trying to define an impartial morality yourself. You should be
trying to get the AI to do it for you. You should pass along the transhuman
part of the problem to a transhuman. That's what Friendly AI is all about.

> But I am not sure how easy it will be to define a generic transhumanist
> ethics, either. For instance, we have seen that among the different
> transhumanists on this list, there are very different attitudes toward
> warfare. My strong preference is to have the AGI believe killing is
> wrong in almost all circumstances. Evidently some transhumanists
> disagree. Should I then refrain from teaching the AGI that killing is
> wrong in almost all circumstances, because by doing so, I am teaching it
> Ben Goertzel ethics instead of generic transhumanist ethics?

Whatever you teach the AI is, under Friendly AI, raw material. The AI uses
it to learn about how humans think about morality; you, yourself, are a
sample instance of "humans", and an interim guide to ethics (that is, your
ethics are the ethics the AI uses when it's not smart enough to have its
own; *that* is not a problem). In the long run, why should the AI pay
attention to your sample material over anyone else's? Seriously, why
*should* an AI pay more attention to you than anyone else on the planet?
Morally speaking.

> I am not sure it's possible to teach a system ethics as a set of abstract
> principles, only. Maybe most of the teaching has to be teaching by
> example. And if it is teaching by example, there's going to be a fair
> bit of individual bias in the selection of examples....

Teaching the system ethics as abstract principles doesn't help at all with
the moral problem; whose abstract principles would they be? But if you give
the AI information about your own morality, it may enable the AI to
understand how humans arrive at their moralities, and from there the AI
begins to have the ability to choose its own.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

