From: Norman Noman (overturnedchair@gmail.com)
Date: Mon Aug 27 2007 - 21:45:02 MDT
On 8/27/07, Stathis Papaioannou <stathisp@gmail.com> wrote:
>
> On 27/08/07, Norman Noman <overturnedchair@gmail.com> wrote:
>
> > I'd like to say that CEV would both make people smart enough to realize
> > religion is a load of hooey, and prevent people from threatening each
> > other with simulations, but frankly I don't know what CEV does, it seems
> > to be more of a mysterious treasure map than an actual target.
>
> What would the CEV of the Pope or Osama bin Laden look like? I
> wouldn't discount the possibility of a theocratic FAI, unpleasant
> though it may be to contemplate.
I don't think it's likely, but I wouldn't discount it either. If I were
writing the AI's goals, I would be quite specific about not tolerating
willful ignorance. I guess in saying this I've already strayed into
forbidden "arguing about friendliness" territory, so, moving on...
> > > If the movement further stipulates that the simulation will be
> > > recursive - simulations within simulations - you could argue that
> > > you are almost certainly in one of these simulations.
> >
> > Except that, under the hypothesis where everybody and his brother is
> > allowed to simulate the universe, there would be billions of recursive
> > simulations and you might be in any one of them. The difficulty in
> > calculating the average effect is partially due to complexity, but
> > also due to the basic implausibility of this hypothetical situation.
>
> That's right, and my point is that for this reason the only rational
> course of action is to ignore the possibility of a simulation.
I said it was difficult to calculate, not that it should be ignored. If
your scenario came to pass (although I certainly do not imagine it will),
it would be smart to give the issue considerable attention, to say the
least.
> > In contrast, Rolf's plan is quite plausible, because it's something
> > that benefits everyone. Not just humanity and the Friendly AI, but
> > the Rogue AI too. If everyone cooperates, then whether mistakes are
> > made or not, humanity will be saved and C will be calculated.
>
> I think there would be more people interested in promoting their
> religion or increasing their profits than would be interested in
> making their descendants' future safe from an RAI. This might not be
> rational or moral or whatever, but it's what people would do.
It doesn't matter what pre-singularity people want, only what the
post-singularity entity or entities with the power to run the simulations
want. I find it very difficult to believe that post-singularity Big
Tobacco and Osama bin Laden will even exist in any meaningful sense, let
alone stay true to the wildly out-of-character schemes you suggest they
will soon have. And even if you're RIGHT, and there is a pandemonium of
human infighting via simulation which cancels out to nothing, there is
no reason Rolf's plan cannot be implemented as well.
> > Are you playing the devil's advocate, or do you really think it's
> > even remotely likely that Big Tobacco would invest in a karmic
> > simulation of the universe in order to get people to smoke?
>
> As you put it, everybody and his brother could join in, with the
> result that the only rational action would be to ignore the
> possibility of a simulation.
So, your answer is YES?
(again, please note that I did not say the issue should be ignored, only
that it was very difficult to calculate what action should be taken.)
> > I don't see how recursive simulations, if the primary simulator
> > bothers to actually run them at all, would make a difference. They
> > would just be more reasons to do the same things already being done.
>
> It makes a difference to the probability calculations. In the simple
> case, if you can be sure that one simulation has been run, you have a
> 1/2 chance of being in that simulation. But if a recursive simulation
> has been run, you have a much higher chance of being in the
> simulation.
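(For concreteness, the arithmetic behind the quoted claim, assuming one
base reality plus a single chain of d nested simulations, each containing
you, and no reason to favor any level:)

# One unsimulated base level plus d nested simulated levels gives a
# d / (d + 1) chance of being at a simulated level, which approaches 1
# as the chain deepens.
def chance_simulated(d):
    return d / (d + 1)

print([round(chance_simulated(d), 2) for d in (1, 2, 10, 100)])
# [0.5, 0.67, 0.91, 0.99]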
If both parties run X simulations, the odds that you are in one of A's
simulations rather than one of B's equal the prior odds of A existing in
the first place rather than B. As X goes to infinity, those odds do not
change.
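To put numbers on that, a toy sketch (the priors p_a and p_b and the
count x are made-up stand-ins, not estimates anyone has actually made):

# Suppose A exists with prior probability p_a and B with prior
# probability p_b, and each would run x simulations containing you.
def odds_in_a(p_a, p_b, x):
    # Chance of being in one of A's simulations, given that you are in
    # a simulation run by A or B at all. The x's cancel.
    return (p_a * x) / (p_a * x + p_b * x)

for x in (1, 10, 10**6):
    print(x, odds_in_a(0.3, 0.7, x))  # 0.3 every time, regardless of x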
And even if it did change...
> If an actual Turing machine with infinite cycles available
> to it exists somewhere (and a priori there is no reason to suppose
> that this is impossible, even if it isn't possible in the universe we
> observe), then we might almost certainly be living in a simulation.
>
...there is a priori no reason to suppose it is possible, either.
Rationally, we must assign non-negligible probability to both cases.
> But this realisation should have no effect on our behaviour.
This does not follow.