Re: FAI means no programmer-sensitive AI morality

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Jun 29 2002 - 09:38:13 MDT


Samantha Atkins wrote:
>
> Ben Goertzel wrote:
>
>>> But it should be equally *true* for every individual, whether or not
>>> the individual realizes it in advance, that they have nothing to fear
>>> from the AI being influenced by the programmers. An AI programmer
>>> should be able to say to anyone, whether atheist, Protestant,
>>> Catholic, Buddhist, Muslim, Jew, et cetera: "If you are right and I
>>> am wrong then the AI will agree with you, not me."
>
> Of course some breeds of religious people would simply claim that unless
> the AI has an immortal soul (or Buddha nature) and is capable of communion
> with the Holy Ghost or some such, it cannot know about these religious
> matters at all. As you say below, a non-empiricist element.

I don't see why a Friendly AI would have trouble handling that case in the
event it turned out to be true. I say "Show kindness toward all sentient
creatures" and "Hunt down the reasons I made that statement". The AI grows
up, scans my physical brainstate, and finds that I have an immortal soul
which contributed to the generation of that statement. At that point the AI
could pray for an immortal soul from God, merge with a human so that the
combined entity would have an immortal soul, say "AI? Forget it! Wrong
future!" and terminate after constructing a few human-enhancement kiosks, et
cetera.

We are perhaps extraordinarily unlikely to find ourselves in that particular
strange situation, but might very well end up in one even stranger.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
