Re: Why playing it safe is the most dangerous thing

From: Peter de Blanc (peter.deblanc@verizon.net)
Date: Thu Feb 23 2006 - 22:56:55 MST


On Thu, 2006-02-23 at 23:33 -0500, Philip Goetz wrote:
> - The worst possible outcome of the Singularity is arguably not total
> extinction, but a super-Orwellian situation in which the people in
> power dictate the thought and actions of everyone else -- and,
> ultimately, George W. Bush or some equivalent wins the singularity and
> becomes the only remaining personality in the solar system.

Extinction is worse.

> - We've already seen, with genetics, what happens when, as a society,
> we "take time to think through the ethical implications". We convene
> a panel of experts - Leon Kass & co. on the President's Council on
> Bioethics - and, by coincidence, they come out with exactly the
> recommendation that the President wants.
>
> - A scenario in which we take time to "consider the ethical
> implications" and regulate the transition to singularity is almost
> guaranteed to result in taking those measures that strengthen the
> power of those already in power, and that seem most likely to lead
> to the worst possible scenario:
> Dubya-(or-Cheney)-equivalent-as-Ubermind.

SIAI is not proposing that the US government or the UN should decide how
to design a Friendly AI. SIAI is not proposing that "we, as a society"
should be thinking about how to build a Friendly AI. SIAI is trying to
build a Friendly AI. Believe it or not, individual human beings are
capable of thinking intelligently about ethics.

> - ... we must conclude that the SAFEST thing to do is to rush into AI
> and the Singularity blindly, without pause, before the Powers That Be
> can control and divert it.

I don't see how committing mass suicide is the safest thing to do.


