From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Apr 14 2002 - 19:36:06 MDT
One reason to actively work for a Singularity, fully sufficient to render it
our maximum priority in the total absence of any other consideration, is
that roughly 1.8 people die per second - over 6,000 per hour, around 150,000 per day. Chewing one
week off the Singularity is enough to win the Nobel Peace Prize twenty times
over.
But it's also true that there's more than that at stake; specifically, the
entire future of Earth-originating intelligent life, including all the
sentient beings who will ever exist after the Singularity - probably an
amount that dwarfs our present world into insignificance, no matter what
your discount rate. Even if you assume this future isn't at risk, it would
still make sense to put every available resource into bringing the inevitable
Singularity about sooner rather than later; but it happens that this future
is, to the best of my ability to guess, endangered.
Evan Reese wrote:
>
> But aside from that, I simply believe WITHOUT PROOF that we are unlikely.
> Perhaps I simply do not want to believe that we would be so unfortunate, or
> stupid, or apathetic to cause all of humanity to be wiped out. I admit
> that.
Not to take him out of context, he went on to write:
> But my viewpoint is at least *partly* based on what I consider evidence.
Even so, I think the previous paragraph above says it all. It is *not
pleasant* to consider the absolute extinction of Earth-originating
intelligent life. In 1996 I wrote about how to accelerate a Singularity I
considered inevitable - with all the technological driving forces involved,
who could prevent it? If I recall correctly, it wasn't until a debate on
nanotechnological offense and defense on the Extropians list - where I wound
up taking the part of "offense beats defense" because the "defense beats
offense" posters were so blatantly wrong - that I realized what I had
previously been unable to look in the face: it is very easy to prevent a
Singularity. All you need to do is extinguish humanity.
Consider Conway's Game of Life - a square grid of cells where each cell "lives"
or "dies" depending on the number of living cells among its eight neighbors:
two living neighbors leave the cell in its current state, three living
neighbors mean life, and any other count (0-1 or 4-8) means death. Conway's
Game of Life is famous for generating astonishingly complex behaviors from
these simple rules, and it has been proven that the Game of Life is
Turing-complete.
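As a concrete illustration of those rules, here is one possible way to write a
single Life update step in Python. This is just a sketch for the curious
reader; the set-of-live-cells representation and the glider example are
illustrative choices of mine, not part of any particular implementation
discussed here.

from collections import Counter

def life_step(live_cells):
    """Apply one generation of Conway's Game of Life (the rules above)
    to a set of live cells given as (x, y) pairs on an unbounded grid."""
    # Count how many live neighbors each cell (live or dead) has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly three live
    # neighbors, or exactly two and was already alive; otherwise it dies.
    return {
        cell
        for cell, count in neighbor_counts.items()
        if count == 3 or (count == 2 and cell in live_cells)
    }

# Example: a glider, the classic small pattern that crawls across the board.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))   # the same shape, shifted one cell diagonally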
Consider a Life board sufficiently huge that an intelligent species evolves
within it. Conway's Game of Life is deterministic and absolutely controlled
by the basic rules; there is no room for intervention without violating one
of the basic rules. And in fact, just to eliminate the possibility of
intervention from outside the simulation, we won't consider this Life board
as a simulation. We'll consider it as a mathematical object. We will ask
"What would happen, under the Life rules, within this universe?" By
assumption, one of the answers arrived at so far is "intelligent life
evolves". What happens to the intelligent life after that - under the
Platonic mathematical rules?
The answer is that whatever is determined by the Life rules, happens,
because as we have set up this thought experiment, this species is beyond
the intervention of God. They have no guarantee of a happy ending. They
have no guarantee that things will turn out right. There is no dramatic
unity in their universe. What happens to them is determined solely by the
forward operation of causality. If one of them accedes to the rulership of
a nation and decides to set up death camps, then eleven million Life
entities will die under the forward operation of causality. If their world
is encapsulated within a level of organization above the basic Life rules
and their physicists figure out how to disrupt that level of organization,
in the same way that (if you've ever played around with Life) one extra dot
can disrupt an entire pattern, then their world and, who knows, their whole
universe may be wiped out. Because that's what the Life rules say should
happen. It won't matter that the physicists were looking for knowledge
under a quest that had previously yielded only benefit to their species.
It's not that their universe ignores this fact, but that their universe
simply doesn't care one way or the other. If the Life rules say their world
is destroyed, their world is destroyed. They have no guarantee of a happy
ending. They are beyond the intervention of God.
Do you recognize this world? You should. You live there.
When the Wright Brothers flew their first plane at Kitty Hawk, they were
flying in their own private bubble of space and time. The universe didn't
look around to see all the previous attempts at flight that had tried and
failed. The universe didn't check to see if the Wright Brothers deserved to
win more than all the previous failed aviators. The universe didn't look
forward in time to the tremendous impact that air travel would have on our
own society. The universe didn't ask whether air travel would benefit
humanity. The *only* question asked by the universe at Kitty Hawk was
whether the wings on this contraption would generate enough lift through
Bernoulli's Principle to take the plane off the ground. It so happens that
air travel was IMHO a tremendous benefit, but not only did the universe not
care, the universe did not check. All the universe checked was the laws of
physics in Kitty Hawk's immediate vicinity. The eventual consequences of
those laws were not the universe's concern. The Wright Brothers put together
something that the laws of physics said should fly, so it did, regardless of
the future consequences or the surrounding social matrix or the ambient
memes about flight.
This isn't the world you see in books and TV shows; those worlds obey the
law of dramatic unity, where it takes an important cause to have an
important effect, and the main character, in the middle of an agonizing
emotional crisis, is never killed by a completely unrelated truck halfway
through the novel. But in real life the probability that you will be run
over by a truck is totally, absolutely unrelated to your potential impact in
the greater scheme of things. It is solely determined by your alertness
when crossing the street. We don't want to think like this because it is
too damn uncomfortable to think that the entire tragedy of WWII would not
have happened if Hitler had just happened to get bitten by a snake as a
kid. I'm not a historian and have no reason to argue the point one way or
the other (although it seems likely that Neville Chamberlain at least was
also necessary to the ensuing tragedy). But it would be perfectly plausible
in Conway's Game of Life universe, beyond the intervention of God - and
because we live in a world like that, we have no reason to suppose that the
same thing doesn't happen here.
It is not true that one individual can't make a difference. There is no
rule which bounds the size of the consequence by the size of the cause.
It could be that Douglas Lenat gives up on the Cyc paradigm and goes back to
building an improved Eurisko, and SuperEurisko running on Blue Gene is
sufficient unto a hard takeoff, and this SuperEurisko turns out to be an
unFriendly superintelligent bacterium, and humanity is wiped out of
existence where it would have otherwise gone through a positive Singularity
and created a tremendously fun future. And it could be that Lenat happens
to stumble across Stanley Schmuck's "Obsequious AI" essay while searching
for online webcomics and decides to give Stanley the chance to fly in and
make a few adjustments to SuperEurisko, and this is enough to take humanity
through a positive Singularity where it would have otherwise been wiped out
by a superintelligent bacterium. And it could be that Stanley is run over
by a car on his way out of the airport and this is enough to cause humanity
to be wiped out where we would have otherwise lived happily ever after.
Such is life in a universe governed solely by the laws of physics and not
the laws of dramatic unity.
It may give you the screaming meemies to contemplate such enormous
consequences resting on such fragile dependencies. I know that it gives
*me* the screaming meemies to think of history being that fragile. I want
to think of the future as resting on really strong convergent supports,
maybe allowing for acceleration if we put in some hard work, but not
allowing for flipflopping between happily-ever-after and total destruction
depending on airline flight arrival times. Who knows, maybe our future does
rest on strong convergent supports and all that's really at stake is a
million lives a week. But I don't *know* that to be the case, and the basic
nature of causality in our universe gives me no right to believe something
so comforting.
I can't help but think of myself at sixteen in the Harold Washington Library
randomly picking Vernor Vinge's "True Names and Other Dangers" off the
bookshelf, or my grand-uncle borrowing "Great Mambo Chicken and the
Transhuman Condition" from his library because he thought the
eleven-year-old Eliezer would be interested, or Michael Raimondi typing
"Why?" into Ask Jeeves. Would we have wound up where we are today
regardless? Maybe. But I can't prove it. All I know is that *some* parts
of my personal life history seem to rest on such strongly convergent
supports that I can't imagine my life having gone any other way, but there
are also specific important events resting on such fragile dependencies that
one time traveller sneezing could have blown them off track. And from my
scant reading of history, it looks like the overall flow of human history
works the same way. Some parts are strongly convergent. Some parts are
not.
Currently, it appears to me that some parts of the Singularity are strongly
convergent, and some parts are not. Moore's Law appears very strongly
convergent, and the development of nanotechnology has also gained enough
momentum that it doesn't look much like the sort of thing one person could
influence anymore. But there are still unconvergent parts of the
Singularity where individual efforts can have leverage, mostly because
nobody else is paying attention. It is indeed beyond absurdity to live in a
world where you can't get face time with Britney Spears because ten million
other people want the same thing, while the people who care about the
Singularity - enough to actually do something about it instead of just
talking about it - are congregated in the cheery uncrowded space of a
handful of mailing lists. But if you do manage to end up living on a planet
like that, you should try to get over the shock and incredulity as soon as
possible, so you can start taking advantage of it.
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence