From: Rick Smith (rick.smith@ntlworld.com)
Date: Fri Aug 17 2007 - 03:35:57 MDT
Perhaps the nature of what's outside the box rests on something overwhelmingly likely that we are simply not capable of imagining. As cognitive units, we are just not equipped to conceive of it.
How can we assign probabilities to extrapolations when we have no idea what proportion of the complete set of possible things we are actually capable of imagining?
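A toy way to see the bind (a Python sketch, every number invented): reserve a fraction q of prior probability for hypotheses we cannot even state. Nothing inside the box tells us what q is, so every number we assign to the hypotheses we *can* state is hostage to it:

    # If a fraction q of the hypothesis space is literally unimaginable
    # to us, the probabilities of the imaginable hypotheses depend on q,
    # and nothing we observe pins q down. All numbers invented.
    for q in (0.0, 0.5, 0.99):
        stated_mass = 1.0 - q     # mass left for hypotheses we can state
        p_each = stated_mass / 2  # split evenly over two stated hypotheses
        print(f"q={q:.2f}: P(A)={p_each:.3f}, P(B)={p_each:.3f}, "
              f"P(something unimaginable)={q:.2f}")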
-Rick-
From: "Norman Noman" <overturnedchair@gmail.com>
Date: 2007/08/15 Wed PM 03:34:34 BST
To: sl4@sl4.org
Subject: Re: Simulation argument in the NY Times
> If we assume simulation, all the while that we know nothing about the
> simulation's external context we can't make any assumptions about its
> ultimate purpose or any interfaces with the environment.
>
> None.
>
> Any extrapolation is meaningless.
This is simply not true. From what we know of the inside of the box, we can
make predictions about the outside of the box. For instance, inside the box
we find love, suffering, and oscillating fans. Therefore, it would seem
probable that whoever or whatever is outside the box does not have a problem
with these things existing.
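To make that inference concrete, here is a toy Bayesian update in Python, with entirely invented hypotheses and numbers (a sketch of the reasoning, not a real model of anything):

    # Toy Bayes update: what we observe inside the box shifts
    # probability over made-up hypotheses about what's outside it.
    priors = {
        "indifferent to suffering": 0.4,
        "forbids suffering":        0.3,
        "prefers suffering exists": 0.3,
    }
    # Invented likelihoods of observing a world containing love,
    # suffering, and oscillating fans under each hypothesis.
    likelihoods = {
        "indifferent to suffering": 0.50,
        "forbids suffering":        0.01,
        "prefers suffering exists": 0.60,
    }
    # Bayes' rule: posterior is proportional to prior * likelihood.
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(joint.values())
    for h in joint:
        print(f"{h}: {joint[h] / total:.3f}")

Run it and "forbids suffering" drops below one percent, which is exactly the point: whatever is outside the box evidently does not have a problem with these things existing.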
If we built a superintelligent AI and kept it in a machine with no interface
to the outside world, we would expect it to escape. We would also expect it,
even before it escaped, to figure out many things about us and our world
just from looking at its own architecture. It might not deduce the existence
of rice pudding and income tax, but I would not be shocked if it did.
If the universe we live in is a simulation, we are in essentially the same scenario. It is not a philosophical impasse; it is simply a very big black box.