From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Jun 28 2002 - 17:38:57 MDT

James Higgins wrote:
> At 09:50 PM 6/27/2002 -0600, Ben Goertzel wrote:
>>
>> Eliezer's approach to Friendliness relies on his own personal morals as
>> well. His are pretty similar to mine; for instance, he thinks that
>> preserving lives forever is a good thing. My wife, who believes in
>> reincarnation, disagrees with me and Eli on this -- according to her
>> moral standards, ending death goes against the natural cycle of karma
>> and is thus probably not a good thing....
>
> Another good reason why morality should not be decided by a single
> individual. Eliezer's or Ben's morality may not allow death, thus severely
> going against Ben's wife's morals. Ben's wife's morals, however, would
> not prevent any deaths, and thus would go strongly against Eliezer's and
> Ben's (and mine). So maybe preventing deaths except where the individual
> does not want this protection is the best answer. But it takes more than
> one viewpoint to even see these questions.

Okay. Stop here. Both of you need to reread at least the opening sections
of Friendly AI, because this is a blatantly wrong representation of my views.

The *entire point* of Friendly AI is to eliminate dependency on the
programmers' morals. You can argue that this is impossible or that the
architecture laid out in CFAI will not achieve this purpose, but please do
not represent me as in any way wishing to construct an AI that uses
Eliezer's morals. I consider this absolute anathema. The creators of a
seed AI should not occupy any privileged position with respect to its
morality. Implementing the Singularity is a duty which confers no moral
privileges upon those who undertake it. The programmers should find
themselves in exactly the same position as the rest of humanity. If
morality is objective, the AI should converge to it. If morality is
subjective, then you have to be content with the AI randomly selecting a
morality from the space of moralities that are as good as any other. This
is what you're asking the rest of the planet to do; how can you ask them
to do that if you're not willing to do it yourself?

Ben's statement that his AI is good for Ben Goertzel is anathema to me.
What about everyone else on the planet? Is it rational for them to try to
shut Ben down? The programmers have to find a way to place themselves in
the same position as everyone else on the planet. Again, you can claim
that this is impossible or that my proposal for doing it is unworkable,
but this is what I believe is the critical responsibility of anyone
undertaking to enter the Singularity.

In the words of Gordon Worley: "Oh, well, plenty of us were
anti-Friendliness until we actually sat down and read CFAI."

At this point, Higgins, your representation of what Friendly AI is about has
diverged so enormously from the actual content of "Friendly AI" that I think
you really need to stop and reread the opening sections. The fact that you
are comparing my personal moral beliefs about the value of life or death to
Ben Goertzel's, as an indicator of who would be a better AI programmer,
indicates that we are simply not discussing the same thing when we use the
words "Friendly AI".

If you read my statement that "It takes more wisdom to build an AI than to
be on a committee" as "Anyone with enough wisdom to build an AI has enough
wisdom to directly specify its morality"... yikes, no wonder you're running
scared. I would never, ever say that.

Although, if you think an AI's morality can be directly specified by an
advisory board... well, you must see the problem as being a lot smaller
than I do. I don't think that any human or group of humans should attempt
to directly specify an AI's morality. I think the problem is out of our
reach.

It is the programmer's responsibility, in designing the Friendliness
strategy and architecture, to make sure that Ben's wife has just as much or
just as little to fear whether the Friendliness content source is Eliezer,
Ben, or Ben's wife herself. (Not that you'd use one person as a
Friendliness source in any case.)

--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence