Re: Military Friendly AI

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Jun 27 2002 - 13:13:40 MDT


Ben Goertzel wrote:
>
> Eliezer wrote:
>
>> Despite an immense amount of science fiction dealing with this topic, I
>> honestly don't think that an *infrahuman* AI erroneously deciding to
>> solve problems by killing people is all that much of a risk, both in
>> terms of the stakes being relatively low, and in terms of it really not
>> being all that likely to happen as a cognitive error.
>
> I disagree.
>
> In my book, if the infrahuman AI that thinks in this way has decent odds
> of evolving into a superhuman AI ... then this killer infrahuman AI is a
> really serious problem!

Depends on whether the infrahuman AI is likely to develop into an SI while
maintaining (what you hold to be) the moral errors of its programmers. An
AI that retains the moral errors of its programmers is a deadly serious
problem whether or not it has the specific problematic habit of killing people!

>> A disagreement with a transhuman AI is pretty much equally serious
>> whether the AI is in direct command of a tank unit or sealed in a lab
>> on the Moon; intelligence is what counts.
>
> No, the comfort level of the AI with killing people also counts, it seems
> to me.

I don't see why you could or would expect an infrahuman AI to exceed the
wisdom of its programmers so early. Why would you expect anything else of
an AI developed by honorable soldiers?

>> Ben, what makes you think that you and I, as we stand, right now, do
>> not have equally awful moral errors embedded in our psyche?
>
> Well, based on your various disturbing comments in these recent threads,
> I'm a lot more sure about me than about you ;-)
>
> According to your recent posts,
>
> a) an AGI project forming an advisory board of Singularity wizards is
> suicidal

SIAI has already considered forming an advisory board. It was when James
Higgins proposed giving the "group" (note: not using the C-word) control
over all Singularity projects that I started having serious problems.

James Higgins wrote:
>
> Ideally, I think deployment (kick-off) of a Singularity project would be
> impossible without the agreement of this group. (The keys would not be
> in the possession of the developers.) All 10 people would have to agree
> in order to launch a Singularity attempt. Ideally this same group would
> oversee all potential Singularity projects, so that they could analyze,
> compare and pick the one with the best potential to be launched.

Anyway, back to Ben:
>
> b) training infrahuman AIs to kill is morally unproblematic

Obviously different AI researchers are going to have different ideas of what
constitutes a proximal moral problem. The job of Friendly AI discussion is
to come up with a strategy such that this difference of opinion makes no
difference in the long run. If someone wants to create a Friendly AI using
what I see as a morally flawed theory, it is my business to come up with a
way to make the human species safe *anyway*, not to tell them to use my
morality instead. Who knows which flaws we have in common?

> c) whoever creates an AGI intrinsically has enough wisdom that they
> should be trusted to personally decide the future of the human race

Heh. No, it's much worse than that. I didn't say that method was good,
just that every other method was worse. Pleasant dreams.

> Well, the point of view that led to these statements seems to *me* to
> embody some moral errors...
>
> Regarding your comments on the subjectivity of morality: Yes, I
> understand that my own morality, which has a tendency (though not an
> absolute one) toward pacifism, is not shared by all. This is part of my
> motivation for thinking that, when a near-human-level AI comes about, an
> advisory board of Singularity wizards would be a good thing.

?

It genuinely strikes me as very strange that anyone would try to fix the
subjective morality problem by taking 10 nodes with subjective moralities
and letting them work it out using a human political protocol. If that were
all it took...

> Of course, this group will *still* not have an objective morality --
> there is no true objectivity in the universe -- but it would have a
> broader and less biased view than me or any other individual.

That is not even close to being good enough.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
