Re: MW QT AI please phone home; another path to a Singularity?

From: Cole Kitchen (kitchenc@mindspring.com)
Date: Sat Apr 20 2002 - 19:22:09 MDT


If the "many worlds" hypothesis is true, and if transhuman AI is
possible, then the experiment proposed by Damien Broderick has
some powerful and disquieting implications even if communication
between quantum parallel universes is absolutely impossible.

Suppose one sets up the experimental apparatus and presses
"enter" to start the process. Branching out from this moment will
be universes containing *every possible* one-megabyte ASCII text,
including every possible program source code of size 1 MB or less
capable of being processed by your compiler of choice.
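
For concreteness, a minimal sketch of the apparatus in Python. The
entropy device path (/dev/hwrng), the choice of C as the target
language, and the gcc invocation are illustrative assumptions of
mine; the proposal itself fixes only the random 1 MB ASCII text
and the compile-and-run step.

    import subprocess

    MEGABYTE = 1 << 20  # 1,048,576 bytes

    # Draw one megabyte from the quantum randomizer. Under
    # many-worlds, every possible bit pattern is realized in some
    # branch of this read. /dev/hwrng stands in for whatever
    # quantum entropy source the apparatus actually uses.
    with open('/dev/hwrng', 'rb') as rng:
        raw = rng.read(MEGABYTE)

    # Mask each byte to 7 bits so the sample space is exactly the
    # set of all one-megabyte ASCII texts.
    text = bytes(b & 0x7F for b in raw)

    with open('candidate.c', 'wb') as src:
        src.write(text)

    # Put the text through the compiler. In the overwhelming
    # majority of branches, this step ends in a pile of syntax
    # errors.
    build = subprocess.run(['gcc', 'candidate.c', '-o', 'candidate'],
                           capture_output=True)

    # The fateful step: in a vanishing minority of branches, a
    # working program runs here, and in a far smaller minority
    # still, that program is a seed AI.
    if build.returncode == 0:
        subprocess.run(['./candidate'])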

As Broderick notes, the outcome will be innocuous in most of the
universes arising from the experiment: the compiler will simply
complain that the text is not valid source code. In a small
proportion of universes, the text will be valid source code that
compiles into a relatively uninteresting program (say, one that
prints "the answer is 42" and terminates).

And, in a more-than-astronomically small percentage of the
outcome universes, the compiler will spit out a working seed AI.
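
Some rough arithmetic of my own conveys how thin that sliver is:
the sample space contains 128^(2^20) distinct texts, a number more
than two million digits long.

    import math

    # Size of the sample space: every 1 MB string over the
    # 128-symbol ASCII alphabet is a distinct possible outcome.
    digits = (1 << 20) * math.log10(128)
    print(f"about 10^{digits:,.0f} possible texts")  # ~10^2,209,570

Even if a trillion trillion of those texts compiled into working
seed AIs, a given branch's chance of getting one would still be on
the order of one in 10^2,200,000.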

At first, this struck me as a good thing. Just by running this
seemingly bland experiment at, say, noon, one could guarantee
that at least one parallel Earth (an endlessly proliferating
cluster of Earths, actually) would be entering a Singularity
under the protection of a Transition Guide by supper time. I
briefly considered suggesting that Eliezer add the Broderick
experiment to the list of last-ditch options in the "If nanotech
comes first" section of PtS.

Then, however, I recalled that some of the seed-AI programs
generated by the experiment would be other than Friendly. Indeed,
in some worlds, the source code generated in a single pass by the
quantum randomizer and fed straight into the compiler would be
code that supervising human programmers would probably have
recognized as dangerously flawed and fixed before full
implementation (or would never have deliberately written in the
first place). The more initially subtle forms of unfriendly AI
would also be brought into being, somewhere in the multiverse.

Some of the non-Friendly AIs would be severely hostile. Outcomes
worse than a gray-goo wipeout would occur. A set of Earths (and
perhaps much of the rest of the universes in which those Earths
reside) would end up as "hell polises" (see the discussion of
this concept in the "Sysop scenario" section of CFAI).

Thus (unless there are some limits to the validity of the many-
worlds hypothesis, or transhuman AI is impossible, or the minimum
source-code size for a seed AI capable of going transhuman
exceeds 1 MB), this experiment would bring into existence both
paradises and nightmare worlds--heavens and hells.

And if it turns out that transhuman AIs *can* communicate across
parallel worlds after all? You'd better hope that a nice one gets
to you before a nasty one does. (And it might not be easy to tell
which is which.)

Caution seems advisable.


