From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Sun Nov 10 2002 - 13:03:12 MST
Ben Goertzel wrote:
> Let Y = "Institutions and people with a lot of money"
> I understand that there are risks attached to convincing Y of X via X2
> rather than X1.
> The problem is that there are also large risks attached to not convincing Y
> of X at all.
> The human race may well destroy itself prior to launching the Singularity,
> if Singularity-ward R&D does not progress fast enough.
> The balancing of these risks is not very easy.
> Taking the coward's way out regarding the risks of PR could have
> dramatically terrible consequences regarding the risks of some nutcase (or
> group thereof) finally getting bioterrorism to work effectively...
Oh. You have a *goal*. I didn't realize you had a goal. It must be okay
to ignore your ethics if you have a goal.
Anyone can be ethical when nothing much is at stake. What makes a
Singularitarian is the ability to keep your ethical balance when the
entire planet is at risk.
I'm curious. What do you propose I should do about the fact that
Novamente *would* destroy the world if it worked, given that you still
don't understand Friendly AI?
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:41 MDT