RE: FAI means no programmer-sensitive AI morality

From: Ben Goertzel (ben@goertzel.org)
Date: Fri Jun 28 2002 - 18:42:46 MDT


> The *entire point* of Friendly AI is to eliminate dependency on the
> programmers' morals. You can argue that this is impossible or that the
> architecture laid out in CFAI will not achieve this purpose, but
> please do
> not represent me as in any way wishing to construct an AI that uses
> Eliezer's morals. I consider this absolute anathema. The creators of a
> seed AI should not occupy any privileged position with respect to its
> morality. Implementing the Singularity is a duty which confers no moral
> privileges upon those who undertake it. The programmers should find
> themselves in exactly the same position as the rest of humanity.

This strikes me as phenomenally unrealistic, Eliezer.

The vast majority of the Earth's population consists of deeply religious
people. A decent percentage of these people think that a truly autonomous
AGI is an abomination against their god!

I have respect for the beauty of the world's religious belief systems, and
for the art, poetry, music, and exalted human personalities they have helped
produce. But I still think these belief systems are profoundly "wrong" in
many ways.

And I don't think it's reasonable to expect that transhumanists and
traditionalist Muslims are going to be in exactly the same position with
regard to a Singularity-approaching AGI. This just isn't plausible. A lot
of traditionalist Muslims and other religious folks are going to think the
AGI and Singularity are bad ideas because they are not in the future history
their religion has foretold.

It *may* be possible to define a kind of generic "transhumanist ethics", and
ensure that one's AGI is imbued with generic transhumanist ethics rather
than with the quirks of a programmer's individual ethics.

But I am not sure how easy it will be to define a generic transhumanist
ethics, either. For instance, we have seen that among the different
transhumanists on this list, there are very different attitudes toward
warfare. My strong preference is to have the AGI believe killing is wrong
in almost all circumstances. Evidently some transhumanists disagree.
Should I then refrain from teaching the AGI that killing is wrong in almost
all circumstances, because by doing so, I am teaching it Ben Goertzel ethics
instead of generic transhumanist ethics?

I am not sure it's possible to teach a system ethics only as a set of
abstract principles. Maybe most of the teaching has to be teaching by
example. And if it is teaching by example, there's going to be a fair bit
of individual bias in the selection of examples....

-- Ben G


