From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Dec 07 2001 - 20:14:07 MST
Jeff Bone wrote:
>
> How do you achieve the penultimate objective, which is to ensure your
> *own* survival so that you can continue to perform your other functions,
> which might be protection and perpetuation of some external constituency?
>
> Bottom line, in the limit: you cannot. Extinction of the "individual"
> --- even a distributed, omnipotent ubermind --- is 100% certain at some
> future point, if for no other reason than the entropic progress of the
> universe.
Don't you think we're too young, as a species, to be making judgements
about that? I do think there's a possibility that, in the long run, the
probability of extinction approaches unity - for both individual and
species, albeit with different time constants. I think this simply
because forever is such a very, very long time. I don't think that what
zaps us will be the second law of thermodynamics, the heat death of the
Universe, the Big Crunch, et cetera, because these all seem like the kind
of dooms that keep getting revised every time our model of the laws of
physics changes. It seems pretty likely to me that we can outlast 10^31
years. Living so long that it has to be expressed in Knuth notation is a
separate issue. Our current universe may let us get away with big
exponents, but it just doesn't seem to be really suited to doing things
that require Knuth notation.
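(A sketch of why "forever" does most of the work in that argument - my own
aside, under the assumption that the extinction risk per era never drops
all the way to zero: if each era carries a hazard of at least some fixed
$\varepsilon > 0$, then

  $P(\text{survive } n \text{ eras}) \le (1 - \varepsilon)^n \to 0$ as $n \to \infty$.

The only escape is for the per-era risk $\varepsilon_n$ to fall fast enough
that $\sum_n \varepsilon_n$ converges, in which case
$\prod_n (1 - \varepsilon_n)$ stays bounded above zero - which is precisely
the kind of loophole that revisions to physical law might or might not
leave open.)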
But I can't see this as something to lose sweat about. Lasting
cognitively for 10^31 years is something I can at least touch with my
imagination; 3^^^^3 years is simply beyond me.
> ---> ABSOLUTE SAFETY FOR ANY SYSTEM, MIND, OR INDIVIDUAL IS A PHYSICAL
> IMPOSSIBILITY UNLESS YOU CAN REWRITE THE SECOND LAW OF THERMODYNAMICS.
I don't think the laws of physics have settled down yet. I admit the
possibility that the limits which appear under current physical law are
absolute, even though most of them have highly speculative and
controversial workarounds scattered through the physics literature. I'm
not trying to avoid confronting the possibility; I'm just saying that the
real probability we need to worry about is probably less than 30%, and
that the foregoing statement could easily wind up looking completely
ridiculous ("How could any sane being assign that a probability of more
than 1%?") in a few years.
> So what does this have to do with Sysops?
Yes, that is the question...
> The point I'm making is really
> about risk; my fear is that we are too anthropomorphically constrained
> to evaluate risks in longer-than-human timescales.
I agree, especially if we have to move from exponential notation to Knuth
notation in order to express the quantities involved. (I usually talk
about Knuth notation rather than infinity because Knuth notation seems
larger.)
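(For readers who haven't run into the notation, the standard definition -
nothing here is original to this argument:

  $a \uparrow b = a^b$
  $a \uparrow\uparrow b = a^{a^{\cdot^{\cdot^{a}}}}$, a tower of $b$ copies of $a$
  $a \uparrow\uparrow\uparrow b = a \uparrow\uparrow (a \uparrow\uparrow (\cdots \uparrow\uparrow a))$, with $b$ copies of $a$

and so on, one more level of iteration per arrow. Already
$3\uparrow\uparrow 3 = 3^{3^3} = 3^{27} = 7{,}625{,}597{,}484{,}987$;
$3\uparrow\uparrow\uparrow 3$ is a tower of 3s that many levels high; and
3^^^^3 stacks yet another level of iteration on top of that. By
comparison, 10^31 is one modest exponent.)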
> The notion of
> Friendly seems to assume a particular set of imperatives for such a Mind
> that may, in fact, be unduly influenced and constrained by those notions
> of "safety," what's desirable, etc.
And here, of course, is where the real disagreement lies.
Under what circumstances is a Sysop Scenario necessary and desirable? It
is not necessary to protect individuals from the environment, except
during the very early phases of those individuals' lifespans; even if we
wished to prolong our merely human phase, we would still eventually
outgrow the need for any external protection, and would become capable of
protecting ourselves to whatever degree we found desirable. That's been
going on for millennia, and we're already a lot better at it than
we were a few centuries ago. It may or may not be necessary to protect
transhuman individuals from one another; it could be that under higher
levels of technology defense massively outweighs offense. However, even
within an impregnable barrier, there is still the possibility of the
violation of sentient rights; a mind can take matter under its strict
control and transform it into a simulation of a mind advanced enough to be
deserving of citizenship rights. If, as is quite possible, any
sufficiently advanced mind has zero probability of engaging in such
shenanigans, then the whole intelligent substrate proposal - Unix Reality,
Sysop Scenario, whatever - would be completely unnecessary except for
humans and during the very early stages of transhumanity.
I think that ruling out the possibility of an unimaginable number of
sentient entities being deprived of citizenship rights, and/or the
possibility of species extinction due to inter-entity warfare, would each
be sufficient cause for intelligent substrate, if intelligent substrate
were the best means to accomplish those ends. Is it actually the best?
Who knows? All I know is that, at my current level of intelligence, this
is the means I can imagine. Maybe you can just delete certain
sections of the branching tree of probabilities, or change the nature of
reality so that the emergent character of the universe shifts from
"neutral" to "benevolent", but I have absolutely no idea how to go about
doing those things.
Whether we are all DOOMED in the long run seems to me like an orthogonal
issue. If everyone is doomed, then everyone will still be doomed under an
intelligent substrate scenario, but at least everyone will be doomed by
inevitable environmental conditions, not by warfare or unethical creators.
Eliminate negative events if possible. Minimize their probability or
delay them otherwise. Why would a correctly created Friendly AI be unable
to understand this? Why would any real AI, Friendly or otherwise, break
down on dealing with nonabsolutes?
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence