Re: New website: The Simulation Argument

From: Jeff Bone (jbone@jump.net)
Date: Mon Dec 03 2001 - 23:23:37 MST


"Eliezer S. Yudkowsky" wrote:

> Jeff Bone wrote:
> >
> > To be more clear: while Friendly scenarios are one "singleton" outcome that
> > supports the simulation stats, strictly speaking simulation (second order) is
> > apparently out-of-scope for Eli's def. of Friendly. (I would argue that Friendly
> > is in some sense necessarily a de novo or first-order simulation.) Regardless,
> > even for loose defs of Friendly, it would seem that the majority of singleton
> > outcomes are non-Friendly.
>
> I don't see how this reasoning operates.

No wonder. ;-) You're right, that was about as clear as mud --- had a couple of
glasses of Shiraz with dinner last night, and apparently there was some problem between
brain and fingertips. ;-) Let me try again. There are three strands entangled
there, two of them not really on topic. What I'm getting at is this: most
conceivable paths through the Singularity do *not* result in egalitarian Friendliness,
but rather result in shrieks followed, perhaps, by periods of tyrannical control of the
non-ascended locals by those (or the one) who ascend.

Eli's strict definition of Friendliness seems to disfavor simulation; Nick's stats
seem to favor the idea that we're currently in a simulation. If both hold --- if a
Friendly SI wouldn't run simulations like ours, yet we are likely in one --- then it is
likely that our simulators are not Friendly but rather something else. Perhaps this
speaks to the very viability of the concept of Friendliness, in the canonical Eli
interpretation.

> I don't see how one can use this argument to reason about the specific
> probability of a simulating PSC as the outcome of a particular case. That
> probability is one of the Bayesian priors; an unknown Bayesian prior, but
> a Bayesian prior nonetheless.

This is a good challenge; I'm going to go jot down a quantification of this argument
and see if I can't get the stats to argue for me. ;-)

> I also don't understand why you say that Friendly AI "is" a simulation,

The Friendly Sysop needs to do exactly the same kind of modeling that a pure
simulation would do in order to fulfill its protective role: its internal model at
time T must be a predictive model of what will happen in the external world at T+1,
so that it can take whatever proactive action is necessary to prevent unwanted harm,
death, etc. The analogy is complicated a bit because this simulation is "convolved"
with physical reality, but the Sysop's internal state is indeed a highly detailed and
comprehensive simulation of the physical environment, and that simulation molds
activity in the physical world. This differs from the "model" maintained by, e.g., a
human individual: our instantaneous model need only cover our own immediate locale in
any real detail, while the Sysop must maintain a similarly comprehensive model of
*every* locale any protected human occupies.
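
Purely schematically --- every name below is invented for illustration, not a claim
about any actual design --- the loop I'm describing is:

    # Schematic only; all names here are made up. The point is that
    # the Sysop's protective loop *is* a simulation step (predict T+1
    # from the world model at T) plus an intervention step.

    class WorldModel:
        def __init__(self, state):
            self.state = state

    def sense(model):
        # Fold real-world observations into the model at time T.
        return model

    def predict(model):
        # Simulate the external world forward to T+1.
        return WorldModel(model.state)

    def unwanted_harm(future):
        # Placeholder predicate over the simulated future.
        return False

    def intervene(future):
        # Proactive action in the physical world, chosen from the
        # simulated future --- the "convolution" with reality.
        pass

    def sysop_tick(model):
        model = sense(model)
        future = predict(model)
        if unwanted_harm(future):
            intervene(future)
        return model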

> why you say that Friendly SI is a singleton scenario that supports
> simulation (I should think it would exclude it completely),

"Supports" in the sense that any technology sufficient to enable Friendliness is
likely also sufficient to run pure ancestor simulations, or simulations of other
civilizations.

> or why you say
> that the majority of singleton outcomes are non-Friendly.

Simply because they are. You yourself have previously described a number of paths
through the Singularity that result in non-Friendliness, as have many others. I'm not
speaking directly of likelihood, rather of the number of Friendly outcomes of the
Singularity (i.e., 1) as a fraction of all possible outcomes (unknown, but > 1). If
there is even one possible outcome other than Friendliness, then Friendliness does
not represent a majority of all possible outcomes.
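
In other words, just counting:

    # One Friendly outcome out of N >= 2 distinguishable outcomes is a
    # fraction 1/N <= 1/2 --- never a majority. (Counts only; this
    # says nothing about the probability weight on each outcome.)
    for n in (2, 10, 100):
        print(n, "outcomes -> Friendly fraction", 1.0 / n)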

> None of these
> are required by Bostrom's argument.

Nope, that's true.

jb


