Re: Shocklevel 5

From: Alden Jurling (nakomus@cnsp.com)
Date: Fri Dec 07 2001 - 19:51:40 MST


Leaving the Sysop aside for the moment, let's say you are a Power and you want
to survive for as long as possible.
Let's say you decide to do that by spreading as quickly as possible. As time
passes, the cumulative chance that some possible disaster has occurred rises,
so your chance of survival drops. But as the space/mass you control expands,
the set of disasters that could potentially destroy you grows smaller and
smaller, and your chance of survival rises. So you have a race between
increasing risk and decreasing vulnerability. Is it a foregone conclusion that
risk will win out in the long run? It seems that you might be able to last as
long as the universe can support your mass and energy needs.
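
Here is a toy way to make that race concrete (the model and every number in
it are mine, invented only for illustration): treat the chance of disaster
per unit time as a hazard rate h(t), so the probability of surviving to time
T is exp(-H(T)), where H(T) is the accumulated hazard. If spreading drives
h(t) down fast enough that H(T) stays bounded, survival probability never
falls to zero; add any constant floor to h(t) and it eventually does.

    import math

    def survival_no_floor(T, a=0.01):
        # h(t) = a/(1+t)^2; accumulated hazard a*(1 - 1/(1+T)) is bounded by a
        return math.exp(-a * (1.0 - 1.0 / (1.0 + T)))

    def survival_with_floor(T, a=0.01, floor=1e-6):
        # same decaying term plus an irreducible floor; accumulated hazard
        # now grows without bound
        return math.exp(-(a * (1.0 - 1.0 / (1.0 + T)) + floor * T))

    for T in (1e3, 1e6, 1e9):
        print(f"T={T:.0e}  no floor: {survival_no_floor(T):.4f}"
              f"   with floor: {survival_with_floor(T):.4f}")

So whether risk wins in the long run seems to come down to whether the
accumulated hazard converges, which is exactly the race described above.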

Also, is there necessarily a significant difference between personal and
'species' survival? (If your backups are distributed widely enough, any
calamity that destroyed all of them would destroy the whole species anyway.)
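
A quick back-of-envelope on that parenthetical (again, made-up numbers; only
the shape matters): independent failures can be exponentiated away by adding
backups, but any correlated calamity puts a floor under the individual's
risk, and that floor is already a species-scale event.

    def p_all_copies_lost(n_backups, p_independent=0.01, q_correlated=1e-6):
        # lost if a correlated calamity hits, or if every copy fails on its own
        return q_correlated + (1 - q_correlated) * p_independent ** n_backups

    for n in (1, 2, 5, 10):
        print(f"{n:2d} backups -> P(all lost this epoch) = {p_all_copies_lost(n):.2e}")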

I'm not sure if there is really a point here or not.

At 07:39 PM 12/7/01 -0600, you wrote:

>The question of risk and risk reduction is, IMO, a very interesting one
>and essential to any long-range planning activity. The point I've been
>trying to illustrate is obscured, perhaps, by a kind of "shocklevel"
>problem --- and since I've been accused of anthropomorphic reasoning
>(Eli has himself made that claim in the past), I feel I need to clarify
>a few points.
>
>The "risk" argument has nothing to do with any anthropomorphic
>assumptions; on the contrary, the counterarguments --- and indeed
>perhaps the whole notion of "Friendliness" --- are grounded in a kind of
>anthropomorphic reasoning and constrained by shocklevel-deficient "event
>horizons." The examples I've used have probably compounded the
>misunderstanding, so let me try to put together an argument that is
>totally (or nearly so) divorced from an anthropomorphic context. I'll
>try to be very clear about the assumptions.
>
>We are explicitly ignoring the following topics:
>
>(1) Whether an immediately pre-Power mind is likely to develop along
>Friendly or malevolent lines.
>(2) Whether any particular course of action is likely to result in
>Friendliness or malevolence.
>(3) Whether an emergent Power has any interest at all in its precursors'
>individual survival.
>
>Let us assume that you are the first Power that results from human
>technological advance. Let us assume that, like all living beings that
>we are aware of, you are concerned about "survival." Let us define
>survival to be "continuity of awareness over time, perhaps punctuated."
>You might be concerned about this for your own sake, if you had an
>individual sense of self, or you might be an altruistic uberbeing
>concerned only with the survival of your own constituents. It doesn't
>matter which: the latter case implies / requires the former, as the
>altruistic uberbeing must ensure its own survival in order to ensure the
>survival of its constituents. Regardless, the essential challenge is
>simple: continue to function and pursue your goals indefinitely over
>time.
>
>---> NO ANTHROPOMORPHIC BIAS
>
>Let's call you "Alpha."
>
>You (Alpha) are a Mind, but your substrate (at least initially) is normal
>matter. Let's assume that you cannot use spacetime itself as a
>computational substrate --- there is no particular reason to believe you
>can't, but there's also no particular reason to assume you can. Let's
>assume that you can turn normal matter into perfectly efficient
>computronium. Even so, whatever mass / volume makes up your substrate is
>subject to physical risks in the long-term in the form of disastrous
>events: planetary events like collisions, stellar events like novas,
>interstellar events like supernovas, reality-changing events like
>collapse of a metastable vacuum state, universal events like the heat
>death of the universe, etc. There are three general strategies for
>dealing with those risks: (1) minimizing your "profile" relative to such
>events, by reducing the volume / mass of the substrate or changing how
>it interacts with other normal spacetime / matter; (2) hardening
>physical security against such events, where possible; (3) distributing
>yourself across spacetime as widely as possible, to amortize the risk of
>single-point failures.
>
>There is a minimal volume / mass substrate required for you to
>perpetuate.
>There is a maximum amount of physical security that can be achieved.
>The lightcone constrains maximum distribution.
>The speed of light constrains interaction across a distributed body.
>The risk of annihilation approaches unity over time no matter what.
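
To make the three strategies and these limits concrete, here is a sketch.
The decomposition and all of the rates are my own guesses, not anything jb
specifies: split the per-year destruction probability by the scale of the
event; distributing across N sites suppresses the local classes (every site
must be hit in the same year, assuming lost sites are re-seeded quickly),
but the universe-scale class hits everything at once, so a floor remains,
and any positive floor wins in the limit.

    PER_YEAR = {                 # illustrative orders of magnitude only
        "planetary":    1e-8,
        "stellar":      1e-9,
        "interstellar": 1e-10,
        "universal":    1e-15,   # vacuum collapse, heat death, etc.
    }

    def annual_risk(n_sites):
        # local classes must take out every site in the same year (treated
        # as independent); the universal class takes them all out at once
        local = sum(p ** n_sites for k, p in PER_YEAR.items() if k != "universal")
        return local + PER_YEAR["universal"]

    def survival(n_sites, years):
        return (1.0 - annual_risk(n_sites)) ** years

    for n in (1, 3, 10):
        print(f"N={n:2d}  risk/yr={annual_risk(n):.1e}  "
              f"1e12 yr: {survival(n, 10**12):.3f}  "
              f"1e18 yr: {survival(n, 10**18):.3f}")

Distribution buys an enormous amount of time, but the last constraint above
still holds: the floor drives the product to zero eventually.
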
>
>How do you achieve the penultimate objective, which is to ensure your
>*own* survival so that you can continue to perform your other functions,
>which might be protection and perpetuation of some external constituency?
>
>Bottom line, in the limit: you cannot. Extinction of the "individual"
>--- even a distributed, omnipotent ubermind --- is 100% certain at some
>future point, if for no other reason than the entropic progress of the
>universe.
>
>---> ABSOLUTE SAFETY FOR ANY SYSTEM, MIND, OR INDIVIDUAL IS A PHYSICAL
>IMPOSSIBILITY UNLESS YOU CAN REWRITE THE SECOND LAW OF THERMODYNAMICS.
>
>Welcome to Shocklevel 5. Infinite survival is an impossibility, even
>(especially) in an infinite universe.
>
>You can *maximize* survival probability over a finite time, but you
>cannot guarantee immortality even given wildly optimistic assumptions ---
>if the universe is open. (If it's closed, it might be a different story
>--- but that's a longshot.) There are specific things you *can* do ---
>moving the substrate to a dark matter basis reduces its interaction
>with normal matter and energy, mitigating many of the risks (planetary,
>stellar, interstellar), but it still leaves you exposed to reality
>failure or universal catastrophes.
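
In the same toy terms as the sketch above (the rates and the suppression
factor here are pure guesswork), the dark-matter move is the first strategy
rather than the third: scale the matter-coupled risk classes down by some
large factor, and note that the reality-failure / universal class, and
therefore the floor, is untouched.

    MATTER_COUPLED = 1e-8 + 1e-9 + 1e-10   # planetary + stellar + interstellar, per year
    UNIVERSAL      = 1e-15                 # vacuum collapse, heat death, etc.

    def annual_risk_dark(profile_factor=1e-6):
        # per-year destruction probability after shrinking the interaction profile
        return MATTER_COUPLED * profile_factor + UNIVERSAL

    print(f"risk/yr ~ {annual_risk_dark():.1e}  (the {UNIVERSAL:.0e} floor remains)")
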
>
>---> SUBSTRATE "MATTERS," I.E. HAS QUANTIFIABLE RISK IMPACT
>
>So what does this have to do with Sysops? Well, admittedly, we're on the
>far reaches of implication with respect to Sysops. The point I'm making
>is really about risk; my fear is that we are too anthropomorphically
>constrained to evaluate risks on longer-than-human timescales. The
>notion of Friendliness seems to assume a particular set of imperatives
>for such a Mind that may, in fact, be unduly influenced and constrained
>by those notions of "safety," what's desirable, etc. Evolution has
>numerous dead ends; it would be horrible to condemn the human line (not
>physically, but considering all our possible descendants) to such a dead
>end through mistakes that trade away long-term viability for temporary
>security.
>
>Long-term species viability and long-term individual viability may not be
>compatible. If we build a system that ensures the latter, we may deny
>ourselves the former. And without long-term species viability, the
>long-term prospects for the viability of intelligence in the universe go
>to zero, as it will certainly take engineering effort on a massive scale
>to minimize some of the large-scale extinction risks.
>
>$0.02,
>
>jb

Alden Jurling


