The ethics of argument (was: AGI funding)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Nov 10 2002 - 10:07:41 MST


Ben Goertzel wrote:
> Eliezer,
>
> I don't see doing a well-thought-out PR campaign in favor of the Singularity
> as necessarily being a moral compromise.
>
> Some PR campaigns *are* immoral; others, in my view, are not.
>
> What moral principle do you think that a well-thought-out PR campaign
> violates?
>
> Think about it this way. Imagine you have 10 seconds to convince entity Y
> of conclusion X. But to tell Y even a reasonable fraction of what you know
> about X will take 60 seconds. So you have to be selective. There is one
> 10-second statement about X (call it X1) that you think really captures
> what's most central and important about X; but there is another 10-second
> statement about X (call it X2) that you think will be more convincing to Y
> (i.e., more likely to convince Y of X).
>
> What is the most rational way to spend your 10 seconds? Telling Y X1, or
> telling Y X2?

Okay... 10 reasons off the top of my head to pick X1:

1) You are not faced with a choice of convincing Y of X using means X1 or
means X2. Rather, you have a choice of convincing Y of X1 or convincing Y
of X2.

2) Y has individual rights. Perhaps "I have 10 seconds to convince
entity Y of conclusion X" is not the best way to frame the problem. Maybe
the correct way to frame the problem is: "How can I help Y? What
information do I possess that Y desires?"

3) How hard have you *tried* to convey X1? Did you try at least once, or
just give up immediately because it seemed too hard? If you do make the
moral compromise, will you spend at least as much time practicing how to
convey X1 as how to convey X2? Is it a temporary compromise or a permanent
one? The usual answer I encounter is that it is a permanent compromise,
made without even trying it the ethical way. People who ask themselves
those questions know enough about ethics not to make the mistake in the
first place.

4) Pick a lifepath. You can assiduously practice, and become an expert
at, conveying the real truth of things in as few words as possible. Or
you can become an expert at telling people what they want to hear. Who do
you want to be?

5) The universe is an uncertain place. Good intentions are not
sufficient. They do, however, count for something. Mistakes made from
the best of intentions are still mistakes, but they are easier to recover
from because you can immediately, honestly, and openly admit to them,
rather than needing to conceal the error and its consequences.

6) Your beliefs themselves are uncertain. Obscuring beliefs about which
you yourself could be wrong adds a double layer of indirection. If you
later discover that you are wrong, it's very likely that there will be no
way to recover - no way to get from what you did convince the audience of,
to what you now believe to be the real truth. I've made mistakes, such as
insisting between 1996 and 2000 that ethical minds can knowably operate at
a moral optimum using blank-slate goal systems. Because I provided my
real thoughts underlying that conclusion, when my thoughts changed it was
not an impossible distance to argue the new conclusion. Had I chosen
persuasive-sounding arguments that were not my own original reasons for
believing, it is unlikely that the old arguments would have had anything
at all in common with the new conclusion.

7) If you start out by immediately compromising your principles, it isn't
likely there'll be anything at all left of them by the time you're
finished. When you're starting out is exactly the time to be most strict.

8) Why is it that people don't even seem to realize that X2 is a risk?
You can argue about whether it's a necessary risk or an ethical risk to
take, but it's most certainly a risk. Have you spent as much time
thinking about all the ways X2 could go wrong - whether or not it's
morally acceptable - as you have arguing yourself into the idea that you
should be allowed to do it?

9) What will people who repeat X2, and who themselves add extra distance
for persuasive power, create in the way of an X2.2? How about an X2.2.2?
Maybe you want to convey X1 yourself, so that people who repeat the
statement will convey X2 rather than Ifni-knows-what.

10) And above all: There are known bugs in the human mind that make it
likely that you are underestimating the size of any given moral
compromise, including the moral compromise represented by arguing X2. So
be careful, dammit! I am reminded of the following quote:

"Adultery always begins with the adulterer(s) claiming to themselves and
to others that the relationship is "harmless" because it hasn't crossed a
certain line. The line where it becomes wrong is the line where you start
having to rationalize like that."
        -- Gelfin

> Unfortunately, this is the situation one is in, when trying to get complex
> ideas across to the mass of humanity, at this stage.
>
> Is it immoral to tell Y X2 rather than X1, in order to convince them?

At the very least it's damn risky, and I'd take the coward's way out
myself. The reason why ethics exist is that, in morality, plus times plus
times plus times plus times minus equals minus. And minus times minus
equals minus. Good intentions aren't enough; it is necessary to be on
your toes.

> The moral dilemma, for me, comes up when there's another statement X3, which
> you know to be FALSE, but which you calculate has an even higher chance than
> X2 of convincing Y of X. In this case, you have the well-known moral
> dilemma of whether to lie to Y for its own good.

Answer: That's up to Y. As the default for human-level intelligences, Y
should be presumed not to wish this unless you hear a direct statement
otherwise ("Please lie to me for my own good.")

> But in the X1 vs. X2
> choice, I see no moral dilemma. [Though I do see a source of frustration,
> because as a truth-seeking individual one would always *rather* say X1 than
> X2...]

Then this is yet another good reason to choose X1. It gives you a safety
margin. A moment of moral weakness will result in saying X2, not X3.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

