From: Eliezer S. Yudkowsky (email@example.com)
Date: Sun Nov 10 2002 - 23:02:49 MST
Ben Goertzel wrote:
>>Oh. You have a *goal*. I didn't realize you had a goal. It
>>must be okay
>>to ignore your ethics if you have a goal.
> I am not suggesting to ignore ethics, in pursuit of some goal. Neither was
> anyone else in this thread, so far as I can tell.
> I was suggesting that doing PR for the Singularity might be the best (*most
> ethical*) course of action, even if it involves initially presenting aspects
> of the Singularity to the mass audience in a carefully "spun" way.
I apologize for being an idiot who fails to define his nonstandard terms.
When I say "ethics", I generally refer to globally applied constraints
on means, as opposed to "morality", which is the description of the ends.
Why be ethical? Because it is moral to be ethical. What, then,
distinguishes ethics from ordinary morality? Ethics are made up
of general heuristics that apply across a wide range of subgoals, each
justified abstractly as a general heuristic, placed in a global pool, and
then tested locally for applicability. Thus the moral processing of ethics has
a different signature pattern from the moral processing of means-ends
analysis, in which you're looking for some set of specific ends that are
(as a direct first approximation) predicted to lead to some parent goal.
Ethics are useful for taking into account a wide variety of nonlocal
properties that may not easily show up when considering "Does A seem
likely to lead to B?". For example, the probability of A stomping on B's
parent goal C, the probability of A having negative side effect E, and so on.
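The contrast can be caricatured in code. A toy sketch follows — all names and data structures are hypothetical illustrations for this email, not a design for any actual goal system: means-ends analysis scores an action only by "Does A seem likely to lead to B?", while the global ethical pool is consulted first and can veto an action no matter how well it scores.

```python
# Toy contrast between means-ends analysis and a global pool of
# ethical heuristics. All names here are hypothetical illustrations,
# not a sketch of any real goal system.

# An ethical heuristic: a predicate over actions that permits (True)
# or vetoes (False) them. This one encodes "share ideas you yourself
# believe, rather than manipulating the audience".
def honest_communication(action):
    return not action.get("manipulates_audience", False)

GLOBAL_ETHICAL_POOL = [honest_communication]

def means_ends_score(action, parent_goal):
    # The purely local question: "Does A seem likely to lead to B?"
    return action["predicted_p"].get(parent_goal, 0.0)

def choose(actions, parent_goal, pool=GLOBAL_ETHICAL_POOL):
    # Filter through the global pool first; only then pick the best
    # surviving action by means-ends score.
    permitted = [a for a in actions if all(h(a) for h in pool)]
    return max(permitted,
               key=lambda a: means_ends_score(a, parent_goal),
               default=None)

spin = {"name": "spun PR", "predicted_p": {"funding": 0.9},
        "manipulates_audience": True}
honest = {"name": "honest PR", "predicted_p": {"funding": 0.6}}

best = choose([spin, honest], "funding")
# Means-ends analysis alone would prefer "spun PR" (0.9 > 0.6),
# but the ethical constraint vetoes it before scoring happens.
```

The point of the sketch is only the order of operations: the ethical check is global and runs first, rather than being one more term in the local cost-benefit sum.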
Specifically, your means-ends analysis is saying that "dumbing down the
Singularity" is a good way to get to the goal "AGI funding", and I am
attempting to point out all the negative side effects that are the reason
for the global ethical constraint "try to share ideas that you yourself
believe, rather than trying to manipulate the audience into cognitive
states that are useful for your short-term goals".
In human terms, the reason that ethics are ethics is that their utility
is *not* directly obvious in terms of means-ends analysis, and in fact
often directly contradicts means-ends analysis, which, for modified chimps
like us, means that some degree of wisdom is needed to stick to your
ethics even when it seems inconvenient. But sticking to them does make you
more effective.
>>What makes a
>>Singularitarian is the ability to keep your ethical balance when the
>>entire planet is at risk.
> Yes, there are very difficult ethical decisions to be made here. I respect
> that you understand the seriousness of the decisions involved, and have
> thought hard about them; but I don't always agree with your particular
> judgments.
> I think your judgment that doing PR for the Singularity is unethical is an
> incorrect judgment. On the contrary, I think it's unethical NOT to do our
> best to do PR for the Singularity -- because doing this PR is the best way
> to create the funding that will accelerate the creation of AGI, improving
> the odds of a beneficial human-level AGI coming about prior to humanity's
> self-destruction.
With respect, Ben, I think I've done a fair amount of Singularity PR over
my lifetime, so I don't think it's fair for you to say that I consider
Singularity PR to be unethical. What I do think is that Singularity PR is
very, very easy to screw up if you begin by compromising those ethical
principles that govern PR in general.
This is why I state that a key quality for Singularitarians is the ability
to keep your ethical balance when the entire planet is at stake, rather
than abandoning the pool of global ethical heuristics simply because the
"ends" in the means-ends analysis got larger than you were used to
processing. Changing the size of the ends doesn't necessarily change the
structure of the problem. You still need the ethics!
> I recognize that I don't have a bulletproof demonstration that my ethical
> judgment is correct here. There are many uncertainties involved. It's
> definitely a judgment call.
"Life is uncertain" is a fully general statement, so I am suspicious when
it is applied in support of a very specific argument like "Let's
compromise the ethical heuristics governing Singularity explanation".
That is, you're selectively applying the fully general statement "Life is
uncertain" to arguments you dislike, such as those supporting the ethical
heuristics, but not applying that statement to arguments you like, such as
your means-ends analysis. If life is uncertain, shouldn't we be more
careful about our ethics, rather than less? Or to be even more specific:
"Life is more uncertain than usual" implies "Where the global ethical
pool contradicts means-ends analysis, pay more attention than usual to the
global ethical pool."
>>I'm curious. What do you propose I should do about the fact that
>>Novamente *would* destroy the world if it worked, given that you still
>>don't understand Friendly AI?
> I do not believe your alleged "fact" is a fact. So I don't think you should
> do anything about it.
Of course. Perhaps your audience doesn't believe that your alleged fact,
"Novamente would greatly benefit the human race", is a fact. Perhaps they
don't want you to manipulate them for their own good.
My ethics are what protect you from my mistakes. Your ethics are what
protect other people from your mistakes. You don't think you're mistaken?
Well, I don't think I'm mistaken about Novamente turning unFriendly
either. Ethics don't switch off when you "don't think you're mistaken".
Ethics don't switch off when the stakes get astronomically high.
"I do not believe your alleged fact is a fact, therefore I don't think you
should do anything about it" is not a moral argument that can be
communicated between you and me, because I don't believe that you're
correct in saying my alleged fact is not a fact. The "therefore" fails.
On the other hand, I do have a number of ethical principles which govern
my actions *even given* that I currently predict Novamente would turn
unFriendly, and you can communicate moral arguments to me by appealing to
those ethical principles, despite our different views of the local facts.
Actually, you don't need to communicate them to me because I already
understand that, e.g., AI projects shouldn't fight, etc., and my mind is
sanitized for your protection so that this valuable heuristic doesn't
switch off when "the stakes are high" or "I don't think I'm mistaken", as
is the usual emotionally intuitive (i.e., hunter-gatherer) default.
> I don't think *anyone* can understand as much about Friendly AI up-front,
> prior to having near-human-level AGI's to study, as you think you understand
> about Friendly AI right now.
(Something of a side issue compared to the main thread of the argument,
but of course from my perspective you have the causality reversed. It's
not a question of deciding that I understand Friendly AI, but of first
deciding that I need to understand X about Friendly AI, followed by
expending effort until I understand X, for all X in the current theory of
Friendly AI. You're arguing from fully general uncertainty again; can you
give a specific X in Friendly AI theory that you do not think it is
possible to usefully consider in advance?)
--
Eliezer S. Yudkowsky                      http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:41 MDT