From: Olie L (firstname.lastname@example.org)
Date: Thu Feb 23 2006 - 22:42:43 MST
>From: "Philip Goetz" <email@example.com>
>Subject: Why playing it safe is the most dangerous thing
>Date: Thu, 23 Feb 2006 23:33:09 -0500
>I was just... (snip) An idea that occurred to
>me, during the section on government regulation, is that:
>- The worst possible outcome of the Singularity is arguably not total
>extinction, but a super-Orwellian situation in which the people in
>power dictate the thought and actions of everyone else -- and,
>ultimately, George W. Bush or some equivalent wins the singularity and
>becomes the only remaining personality in the solar system.
"the people in power?" At least, if they decided to enslave us, they'd bring
on some /human/ conception of slavery.
As it stands, most people in power don't want slavery as such, although
many do want Power. They want to control things, but they want 'players'
against which to pit themselves.
There's no reason for Powermongers to make people suffer, unless they do
something that pisses the Powermongers off.
Humans, even nihilists, have empathy by default. Even psychopaths tend to get
enough conditioning that they generally cause suffering only where it gives
them some advantage.
By contrast, AIs have no empathy by default. Nothing remotely resembling
empathy means no restraint. An experimental-minded AI would have reason to
investigate every facet of human suffering, merely as a matter of curiosity.
It's not like the idea of an AI dictator is anything new.
>- We've already seen, with genetics, what happens when, as a society,
>we "take time to think through the ethical implications". We convene
>a panel of experts - Leon Kass & co. on the President's Bioethics
>Committee - and, by coincidence, they come out with exactly the
>recommendation that the President wants.
As a rule, they come out with an academic version of society's status quo.
Not quite the same as "what the president wants". Typically, a committee
says "individual case decisions should be made by a committee".
>- The internet is the decentralized, difficult-to-control thing that
>it is only because the government wasn't prepared for it, and wasn't
>able to supervise its construction.
>- If we consider, on the one hand, that the internet, developed
>rapidly with little regulation, worked out well and egalitarian; and
>on the other, that a cautious approach is practically guaranteed to
>lead to the worst possible outcome...
I dunno - a lot of things got hacked. I'd rather not have my brain hacked
because I didn't know that Norton's Anti-UFAI was blitheringly useless.
The internet might feel egalitarian to you, but a lot of people don't find
it so.
>- ... we must conclude that the SAFEST thing to do is to rush into AI
>and the Singularity blindly, without pause, before the Powers That Be
>can control and divert it.
Hey, you know what? A nano-dictator would be a terrible thing. Worst
Possible Outcome - well, nearly. So let's encourage researchers to abandon
IMM's and the Foresight Institute's guidelines, and make sure that we beat
the nano-dictators to creating universal replicators!
I think that quoting the Bronze rule is an appropriate way to finish:
Do unto others... before they do unto you.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT