Re: The inevitability of death, or the death of inevitability?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Dec 08 2001 - 15:20:15 MST


Jeff Bone wrote:
>
> "Eliezer S. Yudkowsky" wrote:
>
> > > Bottom line, in the limit: you cannot. Extinction of the "individual"
> > > --- even a distributed, omnipotent ubermind --- is 100% certain at some
> > > future point, if for no other reason than the entropic progress of the
> > > universe.
> >
> > Don't you think we're too young, as a species, to be making judgements
> > about that?
>
> No --- that's the difference between faith and science. It's not a judgement,
> it's a prediction from a model that generates predictions consistent with
> observation and measurement.

Perhaps. Understand that I do not have "faith" that immortality is
possible. I am simply stating that before we get all emotional about this
issue - that is, before we begin making value judgements or philosophical
assumptions based on it - we should remember that the model behind the
prediction has historically changed often and is still in flux.

> While the Big Crunch is a pretty specific model of a physical phenomenon, 2LT
> seems much deeper and more abstract than that --- kind of like Gödel, Turing,
> etc. It's that fundamental and deep a concept --- and the greatest long-term
> risk we can predict, given certain assumptions and certain things that are
> observably true about our universe.

Well... this is something I happen to disagree with, because personally
the second law of thermodynamics strikes me as being statistical in
nature, and hence rather fragile. I would not be surprised to find out
that it is utterly impossible to exceed the speed of light within a given
gravitational frame of reference, forever and amen; 2LT, by contrast,
seems to me to have more the character of a guideline than a rule.
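
Here is the kind of thing I mean - a toy sketch, not an argument. Take
the textbook Ehrenfest urn model: N balls in two urns, and at each step
one randomly chosen ball switches urns. For small N, the entropy of the
current macrostate goes down on a substantial fraction of steps (minimal
Python, assuming nothing beyond the standard library):

# Ehrenfest urn model: N balls in two urns; at each step, one ball
# chosen uniformly at random moves to the other urn.  The entropy of
# the macrostate with k balls in the left urn is S(k) = ln C(N, k).
import math
import random

N = 20           # deliberately small system
k = N            # start with every ball in the left urn
steps = 100_000
decreases = 0

def entropy(k):
    return math.log(math.comb(N, k))

for _ in range(steps):
    before = entropy(k)
    # with probability k/N the chosen ball is in the left urn
    if random.random() < k / N:
        k -= 1
    else:
        k += 1
    if entropy(k) < before:
        decreases += 1

print(f"entropy decreased on {decreases / steps:.1%} of steps")

For N = 20 that fraction comes out near one-half; crank N up toward
Avogadro's number and downward fluctuations of any macroscopic size
become unobservably rare. That is the sense in which 2LT looks to me
like a law of large numbers rather than a constraint built into the
underlying dynamics.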

> IMO, there's still refinement to do, but we're starting to converge on
> something that's a reasonably accurate (yields predictions consistent with
> observed reality) and yet general (works at all scales in all contexts) set of
> base laws. Note too that these won't be "the" laws of physics, but rather "a"
> set of laws of physics. We can never prove (epistemological impossibility)
> that any given set of laws, no matter how accurate the predictions it yields,
> is the best or even the only model of how the world works.

I think perhaps our species is too young to know that as well. I can
conceive of a model which starts at the First Cause and proceeds directly
to the universe as presently observed, with no room for error or even
illusion cast by simulators; I can conceive of an ironclad guarantee such
that even if we lived in a simulated universe, we'd know the ultimate
universe on top ran according to a certain set of laws. But this gets
into questions of the nature of "truth", which is why I say that our
species is too young to know for sure. At present we do not have the
capability to perform the experiments required to determine the nature of
"truth", which to me is a question bound up with the nature of "reality" -
something I expect we'll find out when we start tracing back past the Big
Bang, determining the origin of the laws of physics, and otherwise closing
in experimentally on the First Cause.

> Interesting. How did you get 30%?

That's the current balance between (a) the Principle of Mediocrity as
applied to changes in human civilization over time and our current
position, and (b) my innate scientific conservatism.

> IMO, any system of "rights" in practice actually results in unresolvable
> inconsistencies and paradoxes.

"Unresolvable?" That sounds pretty strong. Can you name a single
unresolvable inconsistency or paradox?

> It may be that the optimal system for allowing independent actors to achieve
> optimal balance of competing self-interests is not a system of axiomatized
> rights coupled with protective and punitive measures (a "legal" system, or a
> Sysop) but rather a kind of metalegal framework that enables efficient
> negotiation and exchange of consensual, contractual agreements.

The two main problems with this are as stated earlier: First, the
possibility of a universe in which offense beats defense; second, the fact
that a simulated, enslaved citizen has no position from which to
negotiate. A metalegal framework might be superimposed on an intelligent
substrate, but it still requires an intelligent substrate to ensure that
no being is stripped of citizenship rights.

> > Whether we are all DOOMED in the long run seems to me like an orthogonal
> > issue.
>
> It isn't really orthogonal; it's possible that the choices we make now --- the
> "angle of attack" with which we enter the Singularity --- may prune the
> eventual possibility tree for us in undesirable ways. I don't think this is a
> reason to futilely attempt to avoid Singularity, I just think it should give us
> pause to consider outcomes.
>
> Example: let's assume for a moment that the universe is closed, not open.
> Let's further assume that Tipler's wild scenario --- perfectly harnessing the
> energy of a collapsing universe and using that to control the manner of the
> collapse, allowing an oscillating state and producing exponential "substrate"
> --- is plausible. Then the best course of action for a civilization would be
> to maximize both propagation and engineering capability to do so when the time
> comes. This very long-term goal supporting maximum longevity of the civ's
> interests may in fact be in conflict with the concept of individual rights and
> volition. Hence, putting in place a Power that favors one may prevent the
> other. That's a tradeoff that needs to be considered in any scenario of
> ascendancy.

If this is indeed an important factor to take into consideration, then my
design responsibility is to build a Friendly AI which duplicates the
portion of your cognitive complexity that causes you to perceive the
argument you have just presented as forceful.

> > Eliminate negative events if possible.
>
> But "negative" has many dimensions, and most of those are subjective...

... he argued, correctly assuming that the audience would perceive
"subjectivity" as a negative quality with respect to principles intended
for a Friendly AI.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


