From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Jan 27 2002 - 12:08:42 MST
"Christian L." wrote:
>
> Not at all, their goal is to stop the Singularity from happening. A good way
> of achieving that is by killing the people involved in research. That is not
> flawed reasoning.
Untrue. Vigilante violence affects nonprofits and universities before
for-profit companies, for-profit companies before military projects, and
military projects not at all. Burning GMO crops may or may not be a good
strategy, but the anti-GMO activists' goal is to minimize the total use of
GMO crops, not to prevent any single GMO crop from ever being harvested
anywhere in the world.
Similarly, pushing for a worldwide ban on AIs will again affect nonprofits
and universities before for-profit companies, for-profit companies before
military projects, open military projects before underground or
intelligence military projects, and military projects in liberal
democracies before military projects in rogue states. And starting out by
attacking the Singularity Institute, which is the only AI project on Earth
that accepts what it really means to be an AI project, the only project
that cares enough about Friendly AI to put some real work into it, is the
worst strategy of all.
So let's dispense with the idea that we are fighting wise strategists who
are taking smart actions under different moral premises. We are fighting
people who explicitly dislike rational thinking, who "trust their
feelings" including hatred, and who are worrisome not because they are our
rational strategic opponents but because their violent impulses threaten
everyone on Earth, including themselves. We are dealing with people whose
actions will be suicidal even under their own moral premises, because they
are not emotionally capable of seeing the unfortunate logical flaws in an
exciting-sounding strategy. If you start thinking of these people as
rational, you will be unable to predict or prevent their actions. (This
does not mean that we can expect them to be tactically stupid.)
Again, this is inappropriate for public discussion on SL4. We should not
be giving these people ideas. But I cannot let stand the idea that
vigilante violence is a smart action under different moral and ethical
premises. That might be true of GMO crops or cloning, but it is a
strategically suicidal way to deal with AI, regardless of goals: it does
not stop AI from being built, it only ensures that the first AI is built
by the projects that are hardest to reach and that care least about
Friendly AI.
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence