"SIMULATIONS: A Singularitarian Primer"

From: Mitch Howe (mitch_howe@yahoo.com)
Date: Sat Oct 06 2001 - 15:25:56 MDT

The following is the result of my efforts to summarize and make accessible
the discussions that have occurred on the SL4 list regarding living in
simulations. I wrote as though it were part of a future FAQ for newcomers
to the list, my target audience being general readers, especially those who
are ready to start gulping down some SL4 material.

Please let me know if I have left out obviously important issues or used
certain terms out of context. I have assumed that certain key terms (sysop,
uploading, etc.) would be linked to other explanatory articles. I have
additionally assumed that nobody on the list is looking for specific credit
for ideas regarding simulations, so while I have not directly quoted anyone,
neither have I cited any one person as a source. (Also, many of my own
ideas inevitably filled areas where list content seemed lacking in juicy
material -- but I think these are pretty sound.)

All editing suggestions are welcome.

SIMULATIONS: A Singularitarian Primer
(based on the musings of SL4 mailing list participants)
by Mitch Howe


Mankind has been creating simulations from its earliest beginnings. When
deciding whether to climb a tree to retrieve a fruit, the brain simulates
the tree, the fruit, the human, and forces such as gravity. An experiment is
run. Whether the simulation ends with the consumption of sweet fruit, or
concludes with being picked apart by a large pack of wolves after a
paralyzing back injury, a decision can be made.

In more recent times, artificial simulations performed by computers have
provided scientists with opportunities to model situations difficult to test
in the field, such as the formation of galaxies or the intricate chain of
events in a nuclear detonation. Other simulators have provided training for
dangerous or expensive tasks, such as military combat and space travel.
These same types of simulations are also adapted to entertainment, in games
that model everything from gladiatorial combat with rocket launchers to the
design and management of amusement parks.

When discussing the future, especially the possibilities that lie beyond the
Singularity, the idea of simulation goes far beyond the detail of the most
advanced military flight simulators, and even beyond the immersive capability
of today's most advanced "virtual reality" gear. The simulations
ahead could be so advanced that the uninformed individual would not even
recognize being in a simulation. These simulations could be so versatile
that they would become the preferred reality of mankind's future.

Simulations of such fidelity would, of course, require incredibly abundant
and powerful technology. While such systems could not be built today, they
are easily imagined and seem perfectly possible for a superintelligence
(SI) -- the expected eventual result of any artificial intelligence (AI)
system able to repeatedly improve on its own design. High fidelity
simulations also seem to be a logical tool for a superintelligence acting as
mankind's benevolent guardian.

The imagined future where a concerned SI regulates human affairs, preventing,
at a minimum, everything from murder to nuclear holocaust, is also referred
to as the Sysop scenario, in reference to a type of computer program that
regulates the activities of other programs. Truly life-like simulations
would almost certainly require the management of an impressively powerful
intelligence, and for this reason simulations and the Sysop concept are
seldom discussed independently.


Given the phenomenal capabilities an SI could have, one could question
whether a Sysop would actually need to use virtual, computerized simulations
to fully protect and serve mankind. Nanotechnology, the widely anticipated
ability to manufacture by manipulating individual atoms, could give a Sysop
nearly infinite reach in the real world. Constants like gravity and inertia
could almost seem to be manipulated at will by invisible, omnipresent
nanotech devices that respond instantly to deflect bullets or cushion falls.
Nanodoctors could render all illness obsolete. Poverty and hunger could be
conquered by limitless manufacturing capacity tailored to every individual
need. It is easily argued, however, that a reality so casually interrupted
by an outside force is, in effect, a simulation. Controlled reality -- this
broadest type of simulation -- should, however, be inherently less efficient
than other possibilities, and so would likely never be implemented.

Basic limitations in human senses are what allow a life-like simulation to
be far more efficient than controlled reality. Human eyes, for example,
cannot discern the individual pixels of a high-resolution digital image from
a few feet away; unaided, they cannot make out skin cells at any distance.
In a simulation, reality does not need to be modeled except as it could be
perceived. Skin cells do not need to exist unless they are being examined
under a microscope. A tree falling in a forest need not bother making a
sound if nobody is around to hear it.
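This perceive-it-or-skip-it principle resembles lazy evaluation in programming: expensive detail is computed only on demand, then cached. A minimal sketch of the idea in Python (class and method names are hypothetical illustrations, not anyone's actual design):

```python
# Sketch: detail is generated lazily, only when an observer can perceive it.
class SimulatedObject:
    def __init__(self, name):
        self.name = name
        self._fine_detail = None  # never computed unless actually examined

    def coarse_view(self):
        # Cheap representation: sufficient for unaided human senses.
        return f"{self.name} (surface appearance only)"

    def microscope_view(self):
        # Expensive representation: computed on first examination, then cached.
        if self._fine_detail is None:
            self._fine_detail = f"{self.name}: individual cells now modeled"
        return self._fine_detail

skin = SimulatedObject("patch of skin")
print(skin.coarse_view())      # no cellular detail has been computed yet
print(skin.microscope_view())  # detail materializes only when examined
```

The savings come from the second method almost never being called: most simulated skin is only ever glanced at, never put under a microscope.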

Freed from the burden of simulating or controlling reality in all its
complexity, a Sysop's task of watching over and serving humanity in an
artificial environment becomes a comparatively simple question of content
delivery. Specifically, how
directly would the sensory data of an artificial environment be sent to the
human brain? Several possibilities present themselves.

The simulation might be projected externally, through special clothing or in
dedicated rooms. This could be similar to the "holodeck" from Star Trek:
The Next Generation. While external systems would be the least intrusive on
the human body, they would also require a great deal of equipment and energy.

Simulations could also be delivered directly to the brain through the
brain's own sensory pathways - the neural lines of communication that tell
it what the eyes are seeing, the ears hearing, etc. This would be akin to
the human condition seen in the movie "The Matrix". The human body would be
preserved, but the brain would receive artificially created content
indistinguishable from that which the body would provide in a similar, but
real, environment. Direct-to-brain delivery of simulations would require
fewer resources than external systems but would still be shackled by the
need to maintain human bodies - or brains, at the very least.

Replacing the brain with an artificial system would completely eliminate the
need for infrastructure dedicated to preserving biological bodies. This
idea, known commonly as "uploading", holds that the human brain is
essentially a complex but reproducible computer. If the pathways of thought
and the mechanisms of memory can be accurately mapped then human
individuality might be transferred and preserved within a computer system,
presumably a superintelligent one. Simulations could then be delivered to
individuals as internally run programs, making the Sysop title for the
controlling entity especially appropriate.

It remains to be seen whether uploading will be possible. The brain still
holds many mysteries, not the least of which is the uncertain nature and
origin of consciousness; that which makes us self-aware may not prove
transferable. Nevertheless, of all currently imaginable types of
simulation, this last, uploaded type has the most potential. It would give
a Sysop the greatest freedom to alter the environment to ensure the survival
and fulfillment of the human race. It would also allow for a tremendously
high population, since the only limiting resources would be energy and
computing material - and any self-improving superintelligence would
eventually optimize the material from which it is made. The densest, most
efficient computing material that can be made out of a given unit of matter,
often called "computronium", would likely become the preferred state for
most matter under the Sysop's control in an uploaded future; a ham sandwich
gives a single biological human the ability to continue functioning for a
few hours, but the matter available in a ham sandwich, when converted to
computronium, might provide for the minds and worlds of a million uploaded
citizens.


Before moving on to some conjectures that can be made about life in a
simulation, it must be said that many do not believe large-scale simulation
is likely or even desirable. For instance, it is logical to conclude that
if humans were uploaded to an artificial substrate they would be free to
expand their own capabilities, becoming superintelligences in their own
right. Humans that go on to obtain god-like powers (a concept
traditionally called "apotheosis") may not have any use for simulated
environments at all.

Also, it is possible that a chaotic and unprotected life would be more
fulfilling than an existence in even the most paradisaical simulation - a
dilemma sometimes referred to as the "gilded cage" problem. There is every
reason to think that a traditional life could be provided, in simulation, to
those who desire it, but knowing it is still a simulation might be hollow
and depressing just the same.

Of course, in an environment where most any desire could be provided, the
mind itself could likely be altered so that it did not realize it was living
in a simulation. But this opens the door to a related concern, for it might
be simplest to merely reconfigure the brain or uploaded mind to a
continually happy state. Creating a stagnant, blissed-out condition is also
referred to as "wireheading", and is potentially destructive of human
individuality and progress.

Not to be ignored, also, is the fear that moving the bulk of human
civilization into a simulation would increase its vulnerability to external
disaster or extinction. If the human race exists as billions of individuals
per cubic foot of computronium, an errant meteorite could prove more
catastrophic than all of the wars and disasters of the last five centuries.
Also, if humanity is reliant on a single superintelligence for its
existence, then should this entity ever become incapacitated the entire
species would be jeopardized. It stands to reason that anything
superintelligent would have prepared for these possibilities to the point
where unsimulated life would be far more dangerous, but the worst disasters
are, almost by definition, unanticipated.


Upon first consideration, it may seem unlikely that any meaningful
predictions can be made about what it would be like to live in a simulation,
especially as an uploaded individual. However, some logical conclusions can
be made based on one simple fact: Resources will always be finite. Even if
a Sysop converted all the matter in the solar system to computronium and the
energy to power it, there would still be limits to what it could do. The
same would hold true if the entire galaxy were consumed this way. The
capabilities of a superintelligence so endowed may seem unimaginably huge,
but they could still be defined. As a result, there must be decisions made
as to how resources are allocated.
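The underlying arithmetic is simple, however large the numbers. A toy illustration in Python (every figure here is invented for the sake of the example; none comes from any actual estimate):

```python
# Toy illustration: even enormous but finite computing capacity implies a
# hard ceiling on how many minds it can run. All numbers are invented.
total_ops = 10**40        # hypothetical ops/sec of a computronium solar system
ops_per_mind = 10**17     # hypothetical cost of running one uploaded mind
available = total_ops * 9 // 10        # reserve a tenth for the Sysop itself
max_population = available // ops_per_mind
print(max_population)     # a vast number, but still a definite limit
```

However many orders of magnitude are added to the capacity figure, the division still yields a finite ceiling, which is the whole point of the argument above.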

Certain tasks are guaranteed to consume matter and energy. Bodies, brains,
or uploaded minds will probably be common and intensive operations to
maintain. It stands to reason, then, that there would be a maximum
population that a Sysop could support, necessitating regulations regarding
population growth. But if humanity retains any of its
most human characteristics, there will be the desire to reproduce. A
benevolent Sysop would understand this, and probably provide a way for it to
occur. But it might set limits based on the rate at which it increases its
own capacity to maintain a growing population. These could be very
accommodating early on when capacity would likely be many orders of
magnitude greater than needed. These limits may become very strict,
however, as new computing material ceases to become available. There is,
after all, no reason to believe that uploaded minds, or even biological
brains in the care of a superintelligence, would ever free up resources by
dying of "natural" causes.

Traditional human reproduction would not necessarily be the only means of
population growth. It should be possible to create simulated human minds
from scratch. These would consume the same resources as uploaded minds, and
ethically they should be entitled to the same rights and privileges as
anyone else. They would certainly be no less human or intelligent by any
measurable trait. Hence, there would likely be regulations regarding the
creation of new minds from scratch at least as restrictive as those
governing traditional kinds of reproduction.

It has already been mentioned that the infrastructure needed to support a
single biological human in a traditional environment is far greater than
that needed to maintain a brain or uploaded mind in a simulation. Although
it seems unlikely that a benevolent, friendly Sysop would force uploading or
simulation on anyone not born that way, these options are sure to be
strongly encouraged. This should not be too difficult, since the lifestyle
available in simulation promises to be far more enjoyable than the
traditional life so famously described by the philosopher Hobbes as nasty,
brutish, and short. Pain, hunger, illness, poverty, crime, war, and even
death need not enter the pearly gates of simulation.

Simulations themselves would also consume resources, however, meaning
communal simulated environments should be more efficient than private
simulations for all. Humans are evolved social creatures, and there is
every reason to believe that they would generally prefer to share large
simulations anyway. Communal simulations require a more complex system of
Sysop intervention, however, to ensure that nobody's basic rights are
involuntarily infringed upon. Murder and rape would be wholly prohibited.
Nevertheless, a benevolent Sysop would likely wish to provide some sort of
venue for those who wish to experiment with forces and actions that, while
harming no unwilling sentient, would in a shared environment infringe upon
the rights of others. So it seems reasonable that life in simulation would involve
some combination of large, communal environments and specialized, private
workshops. The Sysop could determine some means of equitably doling out
available resources to allow for those who crave the power of private worlds.


Beyond the finite nature of resources, little can be known for sure about
the nature of life in simulation, but speculations are plentiful and make
for interesting discussions. One that has probably been covered more than
any other, for thousands of years in fact, is the assertion that all of us
are already living in a high-fidelity simulation, or some kind of dream that
does not represent "true" reality. Though impossible to disprove, this
concept is typically weakened by the inability of its proponents to
adequately explain the purpose of a simulation that includes seemingly
pointless pain and suffering in abundance. (Religious arguments are often
introduced into this discussion, usually resulting in inconsistent or
circular logic.) A common sentiment is that any entity uncaring enough to
allow simulation participants to experience such generally brutish
conditions would probably not take enough interest in humanity to bother
creating and maintaining a simulation in the first place; an unconcerned SI
would be better off eliminating or ignoring humankind entirely, making it
unlikely that we are currently living in a simulation.

Pondering life in a simulation leads many to wonder what kinds of
institutions might still prove useful to humanity. For instance, some
speculate that a barter or capitalist economy might thrive, with computing
resources as the ultimate medium of exchange. The idea is that a Sysop
might equitably distribute computing power to all its dependents (often
called "citizens"), but that these individuals might in turn trade this
precious commodity for goods and services that cannot be simulated. The
most common counterpoint is that there would be no obtainable good or
service that the Sysop could not simulate, making trade between citizens
pointless. However, given the general understanding that
sentient persons could not be simulated without becoming sentient wards of
the Sysop in their own right, it seems probable that certain pleasures or
services could only be satisfactorily provided by other citizens: those
calling for the most subtle human touch or requiring voluntary infringement
of protected rights.

The nature of time in simulation also lends itself to contradictory
viewpoints, especially in the context of uploaded individuals. Some see
perceived time as being accelerated by the speed of the hardware upon which
minds and simulations are run - an uploaded person living a hundred years,
for example, in ten minutes of "real" time. Others see it possible that the
speed of uploaded thought and simulations could be slowed to conserve
energy - an uploaded mind experiencing ten seconds, for instance, in 100
years of real time. Naturally, still others predict a middle solution where
simulated life is clocked to match real time, perhaps in homage to history
or tradition.
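All three positions are just different choices of a single scaling factor between subjective and real time. Using the figures from the text, the implied factors can be worked out in a few lines of Python (a back-of-the-envelope sketch, nothing more):

```python
# Toy arithmetic: subjective time scales linearly with how fast a
# mind's simulation is clocked relative to real time.
def subjective_seconds(real_seconds, speedup):
    """Subjective seconds experienced during a span of real time."""
    return real_seconds * speedup

century = 100 * 365 * 24 * 3600      # one subjective century, in seconds
# Accelerated case: a hundred subjective years in ten real minutes.
fast_speedup = century / (10 * 60)
# Slowed case: ten subjective seconds spread over a hundred real years.
slow_speedup = 10 / century

print(fast_speedup)   # millions of times faster than real time
print(slow_speedup)   # a tiny fraction of real-time speed
```

The "homage to history" option is simply `speedup = 1`; nothing in the arithmetic privileges any particular value.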

Related debates concern the longevity of uploaded citizens, such as whether
it might be appropriate for otherwise immortal minds to be terminated in
order to allow everyone interested the experience of reproducing another
sentient - an interesting and controversial tradeoff should the Sysop
exhaust all means of obtaining new resources. The right of an individual to
reproduce might be tied to an obligation to "die" after a predetermined
amount of time. Ethical dilemmas complicate this seemingly straightforward
approach, such as the possibility that someone could enter into a
reproduction/death contract without fully appreciating the consequences -
after all, the consequences of death are not understood now and may not be
in the future, either. Logically, a superintelligence ought to be able to
make competent judgments about the maturity of each citizen, but it might
still determine that the fairest course of action would be to universally
allow reproduction and universally terminate all citizens upon reaching a
certain age - or to deny reproduction entirely if resource limits are reached.

Harking back to earlier discussions regarding the security of civilization
in a simulation, another interesting line of thought concerns the idea of
"backups" - identical copies of a citizen's mind, presumably for the
purpose of redundancy in case of disaster. The logical counterpoint to this
concept is that a duplicate of a mind would, unless exposed to the exact
same environment, diverge from its original in memory and function. The
backup would thus not be a backup at all, but a new and unique sentient. Of
course, when discussing simulations it may be possible to exactly replicate
the original's environment for the sake of the backup, but many would argue
that these minds would still be different people that could not be ethically
interchanged. This problem has very deep philosophical roots and may never
be resolved even if the nature of consciousness is fully understood.
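The divergence argument can be illustrated mechanically. In the sketch below (Python, with a deliberately artificial stand-in for a mind's state-update rule), two bit-identical copies stay interchangeable only as long as their inputs are identical:

```python
# Sketch: two initially identical "minds" diverge as soon as their
# sensory inputs differ. The update rule here is a hypothetical stand-in.
import hashlib

def step(state, percept):
    # New state depends deterministically on old state plus sensory input.
    return hashlib.sha256((state + percept).encode()).hexdigest()

original = backup = "identical-initial-state"
# Identical environments keep the copies interchangeable...
assert step(original, "red bird") == step(backup, "red bird")
# ...but a single differing percept makes them distinct thereafter.
original = step(original, "red bird")
backup = step(backup, "blue bird")
print(original == backup)  # False
```

The ethical question in the text is precisely whether that post-divergence copy is a backup of anyone, or simply a new person.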

Also reviving a previously mentioned concern, some feel that even if an SI
and mankind jointly determine to move to a completely simulated environment,
uploaded minds might retain the ability to expand and utilize ever more
resources for "thought". One possible scenario has the Sysop allocating
resources to everyone equally, with every individual free to use these for
any purpose, whether that be to create large and complex simulations or to
enhance the power of their own minds. This vision allows for uploaded minds
that become superintelligent prisoners, starved of an environment after
having converted all their available resources into mental capacity. The
ability of minds to expand their own abilities may thus need to be regulated
in order to avoid the unsavory, presumably unethical confinement of
superintelligent beings. But would restricting the capacity for growth be
any more ethical?

It should be obvious now that many a moral dilemma punctuates the envisioned
potential of simulations. Would a simulated mind's feelings be any less
valid than those found in a biological brain? Could a superintelligence
actually be entitled to more rights and resources than an individual of
traditional capacity? How much intelligence does an artificial mind need to
be considered sentient?


Easy answers to such questions do not exist, but contradictory opinions
abound. As with many controversial issues, however, there are benefits to
continued discussion. More debate may not result in consensus regarding
simulations, but should help to inspire guidelines that people can be
comfortable with. Exploring the possibilities of life in simulation can
help prepare individuals and communities for the mind-boggling potential
benefits of artificial environments while acknowledging their limitations
and requirements.

