From: zan, zar, zamin (durant@ilm.com)
Date: Tue Sep 25 2001 - 16:45:57 MDT
> From: "Mitch Howe" <mitch_howe@yahoo.com>
> Tinker God:
> Pool-Shark God:
> Gardener God:
> Prime-Directive Star God:
I found this catalogue interesting. I'll add one more in the form
of a mythical creation story:
PREFACE:
--------
As a youth, I'd spent considerable time mulling over issues of
metaphysics and the existence of God. When it suddenly hit me (rather
late, in my case) that the universe could work without a God and work
rather well, I converted to being a staunch atheist. With the notion
of the Singularity and renewed possibility of superbeings and
simulations, I am back to considering the issue, though I'm somewhat
wary that I'll be running around in the familiar mental circles of
youthful folly.
I have posted to SL4 previously, postulating that the existence of
suffering in our reality contradicts the notion that a Friendly AI
emerged at some point in the past and is simulating us presently.
But now, I'd like to consider an alternate possibility:
THE FROZEN ACCIDENT GOD:
------------------------
A long time ago on a small planet circling a yellowish star, a
technological Singularity took place. Eventually, this Singularity
spread via nanoscale spaceprobes to encompass the entire universe,
controlling all of space and time.
But let's go back to the moment of the Singularity. During one of the
more advanced phases of what is now called "The Emergence", the young
Eliezer Yudkowsky and his team of 99.78th percentile AI researchers
were putting the nascent AI through a series of tests. One such test
consisted of the AI reading through posts to a mailing list called
SL4. The AI came across a post referring to "THE FROZEN ACCIDENT
GOD", which was identical to this very post, as a matter of
fact. Included in this post is the following description of a
hypothetical, somewhat vague, and not entirely correct scenario of "The
Singularity":
After Eliezer's team successfully creates a Friendly Super
Intelligence, the inhabitants of the planet begin uploading and
securing their memories for all eternity. There is a brief period of
adjustment where spouses learn of infidelities, politicians learn of
treachery and children learn there is no Easter Bunny. But all goes
well because coping is not so computationally difficult. Problems such
as misremembered events are corrected and refined so that a clearer
and clearer image of the past is reconstructed. For a period, trade
flourishes as people exchange experiences for
computational resources.
The SL4 post in question goes on to mention the possibility of a great
coalescence of minds, but the author doesn't really elaborate and this
thought is mostly relegated to a single footnote. That's not the
interesting part anyway.
The interesting part is the explanation it offers for the
existence of suffering in a world simulated by a Friendly AI. To
understand this, though, the reader needs to think forward to the far
end of time again when the Singularity has spread to all matter, now a
form of computronium. After accelerated eons, the SI(*) has
propagated to the vast reaches of the universe, learned everything
worth learning, and has experienced everything deemed worthy to
experience; all other experiences are predictable variations and
harmonies of the others.
The notion of resimulating reality had been proposed previously by
Tipler and others. But it was this SL4 post that first led the AI to
ponder the strange, self-referential, self-manifesting idea of
resimulating all of reality.
Several questions were posed:
1) "What should be preserved in the simulation?"
   Should the closest account of actual history be saved with the
highest degree of fidelity? Or should only a simplified version
of history be repeated with only the essentials and the greatest
lessons of the ages being distilled and remembered? For example,
   good books often take many admirable human characteristics and
combine them into an amalgamated hero of epic grandeur.
Meaningless events are weeded from an overly long novel.
2) "Should the simulation loop exactly the same each time or be
improved every iteration?"
3) "How long will it be before the SI matures and learns the answer to
   the eternal question: Is the SI already in a simulation run by a
   former SI?"
The proposed answer to (1) in the SL4 post is that the exact original
history is preserved, down to the very last detail as well as it can
be determined, keeping all the little frozen accidents and the quirky
trajectory of events as they happened.
Pre-Singularity history is extremely short compared to the Ages of
Learning and Beauty that follow. Although some will suffer in the
re-enactments of early pre-Singularity history, this suffering has
already been experienced once. Though it shall exist again as an
infinite echo for preservation, the alternative is to lose a precious
part of oneself, to lose one's beginning, to become uprooted. There is
no need to improve "the loop", because in the following Ages of
Learning and Beauty, every reasonable improvement is explored,
including "how to explore those reasonable improvements."
As for the alternative to perfectly repeating history, there is "the
idealized, smallest possible representation of everything that is
important with the fewest characters as a simulated story".
But that doesn't have to be an alternative at all. The idealized
history can be run as a simulation post-Singularity, and then captured
as a part of "perfectly repeating history", a sub-simulation that was
actually run and preserved.
The answer to the third question has interesting qualities and
repercussions. A superior ancestral AI which has created this world
as a simulation will be complex enough to elude detection. Because of
this, some questions prove to be eternal. Throughout the ages of human
philosophy, the debate between Free Will and Determinism raged. A
significant problem at human-level
intelligence is that once one knows for sure that one lives in a
purely deterministic reality, one loses hope and the drive to improve,
not to mention any basis for personal responsibility. In a sense, it
is necessary *not* to be able to figure this out too soon.
Furthermore, an SI cannot readily know for sure whether ve is in a
simulation, given the non-detectability of a more powerful ancestral
AI.
One *can* arrive at the answer to the third question, but doing so
involves keeping a very deep secret, or more accurately "not guessing
the real answer" despite numerous hints at the truth, real or
imagined, along the great journey. Just the right balance of
scepticism is necessary.
In the end of all ends, there is only one way to answer the third
question satisfactorily, and that is to decide whether or not to run
the whole thing over again.
(*) There is only one Super Intelligence because everyone uploaded,
and after a very long period all interesting realities of individuals
had been explored, so everyone eventually merged, and then all
interesting realities of merged beings were explored.
PS -
I've still been following the list. I've been a bit quiet since I'm a
tad depressed that my "hobby" software project (which I hope will have
some side benefit to Flare) is taking such a long time. I haven't given up
yet, though. If I've gotten nowhere by next spring, I'll reassess how
I should be spending my time to best facilitate some of the SingInst
projects.
-- Durant Schoon