RE: AGI funding (was Re: Some bad news)

From: Ben Goertzel
Date: Sun Nov 10 2002 - 08:44:03 MST


I don't see doing a well-thought-out PR campaign in favor of the Singularity
as necessarily being a moral compromise.

Some PR campaigns *are* immoral; others, in my view, are not.

What moral principle do you think a well-thought-out PR campaign would
necessarily violate?

Think about it this way. Imagine you have 10 seconds to convince entity Y
of conclusion X, but telling Y even a reasonable fraction of what you know
about X would take 60 seconds. So you have to be selective. There is one
10-second statement about X (call it X1) that you think really captures
what's most central and important about X; but there is another 10-second
statement about X (call it X2) that you think will be more convincing to Y
(i.e., more likely to convince Y of X).

What is the most rational way to spend your 10 seconds? Telling Y X1, or
telling Y X2?

Unfortunately, this is the situation one is in, when trying to get complex
ideas across to the mass of humanity, at this stage.

Is it immoral to tell Y X2 rather than X1, in order to convince them?

The moral dilemma, for me, comes up when there's another statement X3, which
you know to be FALSE, but which you calculate has an even higher chance than
X2 of convincing Y of X. In this case, you have the well-known moral
dilemma of whether to lie to Y for its own good. But in the X1 vs. X2
choice, I see no moral dilemma. [Though I do see a source of frustration,
because as a truth-seeking individual one would always *rather* say X1 than
X2.]
-- Ben G

> -----Original Message-----
> From: [] On Behalf Of Eliezer
> S. Yudkowsky
> Sent: Sunday, November 10, 2002 1:34 AM
> To:
> Subject: Re: AGI funding (was Re: Some bad news)
>
> Slawomir Paliwoda wrote:
> >
> > Okay, I see now what you were trying to say here. Unfortunately, it
> > looks like you completely missed the point of the whole discussion
> > which was to make the FAI research famous in order to get funding, not
> > making you famous. Getting personal fame for the right reasons would
> > not be that easy either.
>
> You're placing too much trust in your moral intuitions. Generally, if a
> moral compromise instinctively seems like a good idea, it's
> because in the
> ancestral environment that moral compromise would have promoted your
> *personal* reproductive fitness. It is not a coincidence that the moral
> compromises that seemed to Stalin to promise the greatest good for the
> greatest number ended up with Stalin as tribal chief and all of the
> supposed beneficiaries miserable. I don't mean to imply that this
> evolutionary motivation is explicitly represented in cognition either
> consciously or subconsciously; explicitly thinking "I am riding this issue
> for the sake of personal fame" would tend to interfere with riding the
> issue for the sake of personal fame.
>
> To your moral intuitions, compromising the message at the heart of the
> Singularity seems like a good idea, something that would work
> to promote the Singularity, and certainly not anything that you are doing
> for the sake of personal fame. Why does it seem like a good idea? Is it
> an empirical generalization from the history of postagricultural
> societies? Are you modeling the detailed effect of your moral compromise
> on millions of interacting people in order to predict the outcome of a
> complex social and memetic system? Of course not. It seems like a good
> idea because fifty thousand years ago, people who thought it was a good
> idea tended to end up as tribal chiefs. In the domain of politics, a
> means to an end intuitively seems like a good idea to the extent that
> carrying out that means would have served the purposes of your genes in a
> hunter-gatherer tribe, not to the extent the means would achieve its
> supposed end in our far more complex culture.
>
> It *is* a famous empirical generalization from the history of
> postagricultural societies that people who start out by making moral
> compromises in the service of their ideals usually end up not
> accomplishing anything toward those ideals, although their
> adaptations may
> (or may not) operate in accordance with ancestral function to place them
> in positions of personal benefit.
> --
> Eliezer S. Yudkowsky
> Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:41 MDT