Re: Reactions to Bostrom's Simulation Essay

From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Thu Mar 20 2003 - 08:37:38 MST


On Thu, Mar 20, 2003 at 12:53:00PM +0200, Aleksei Riikonen wrote:
> Robin Lee Powell wrote:
> > I doubt any of this is original to me, and it's certainly not well
> > written, but I wanted to share it anyways.
> >
> > http://www.digitalkingdom.org/~rlpowell/rants/simulation_errors.html
>
> You seemed to hold the following views:
>
> (1) Bostrom is drastically underestimating computational requirements,
> and thus his simulation argument is incorrect.
>
> (2) For ethical reasons, posthuman civilizations will probably refrain
> from running ancestor simulations, and thus Bostrom's simulation
> argument is incorrect.
>
>
> A comment on view (2):
>
> I don't think that Bostrom argued that posthuman civilizations
> necessarily would run ancestor simulations. Rather, he argued that if
> we reach a posthuman stage, and _if_ we end up running ancestor
> simulations, then it is likely that we are currently living in a
> simulation.
>
> Thus (2) is not a counterargument against Bostrom, but a
> counterargument against a view not presented by Bostrom in "Are You
> Living In a Computer Simulation?"

That's an excellent point. Thank you. I'll modify the page.

> A comment on view (1):
>
> > [...] an AI which is capable of observing all the humans in the
> > simulation at once and feeding them only the data they need [...] We
> > can't even imagine how much computing power such an AI would have to
> > have, so I won't even try. I will suggest, however, that such an
> > AI's computing needs would dwarf the computing needs of the human
> > brains in the simulations by dozens of orders of magnitude.
>
> You present no grounds for this suggestion, and I don't see why
> Bostrom would be wrong in assuming that the computing power needed for
> the process of selecting what to simulate precisely and what to
> "fudge" would be rather trivial.

I think it's a ridiculous thing to speculate on. He could be right, I
could be right, or we could both be incredibly, mind-numbingly wrong in
one direction or another.

I simply think that an intelligence with the capacity to keep the state
of billions of other intelligences in its head (and to collate and
manage those states, the states of those intelligences' environments,
and so on) would be many, many times more complicated than all of the
intelligences it manages combined.

I have no evidence for this. I don't think evidence for this is even
*remotely* possible. It's a belief, nothing more, and I'm reluctant to
pretend otherwise by attempting to argue it.

I will, however, add all this commentary to my page.

Thanks for your comments.

-Robin

-- 
http://www.digitalkingdom.org/~rlpowell/    ***    I'm a *male* Robin.
Step one: collect the underpants.
Step two: what?  Step three: *profit*!
http://www.lojban.org/   ***   (so say the underpants gnomes)

