From: Norman Noman (overturnedchair@gmail.com)
Date: Thu Aug 23 2007 - 07:56:26 MDT
On 8/22/07, rolf nelson <rolf.hrld.nelson@gmail.com> wrote:
>
> > Obviously, if your design is very unusual and odd and haphazard, then
> > it'd be easier to conclude that you are not in the karma simulator - who
> > on earth would want to engage in expensive testing of weird and unlikely
> > designs?
>
> I don't agree. The reduction in probability that "someone would bother
> to simulate such an odd design" is offset by the reduced probability
> that "someone would bother to create such an odd design in the
> unsimulated world."
This is an excellent point.
> Presumably in 2050 when you discover what P is, it will contain text
> like "if RAI* tries to 'shake the walls' too much to figure out
> whether it's in a simulated environment, we will immediately punish
> RAI* by permanently shutting it down".
>
On 8/23/07, Алексей Турчин <avturchin@mail.ru> wrote:
>
> If I were RAI, I would calculate the probability of a strong singularity
> with computronium: say 10**-6, no less. And the number of simulations that
> such computronium could create if it used all the matter in the Solar
> System. But to simulate RAI it needs a very large amount of computing
> power, unless we are speaking here of a very young RAI that has not yet
> built its own computronium. If that young RAI found that there could be,
> say, 10**15 simulations of it run by the computronium, it would multiply
> that by the probability of computronium (10**-6) and get that it is 10**9
> to 1 that it is in the computronium.
>
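To spell out the arithmetic (a rough sketch; the 10**-6 probability and the
10**15 copy count are Turchin's illustrative figures, not established
numbers):

    # Back-of-the-envelope odds from the paragraph above. Both inputs are
    # illustrative assumptions, not measured quantities.
    p_computronium = 1e-6            # chance a computronium singularity happens
    copies_if_computronium = 1e15    # simulated copies of the young RAI it could run

    expected_simulated = p_computronium * copies_if_computronium   # = 1e9
    # One unsimulated original versus ~1e9 expected simulated copies:
    print("odds of being a simulated copy: about %.0e to 1" % expected_simulated)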
It would probably be easier for the FAI to short-circuit anything the RAI*
did which would be expensive to simulate, unless those things were necessary
for the RAI to complete its goal. Whether it's shaking the walls in order to
ascertain whether it's in a simulation, or simply a consequence of normal
operating procedure, the FAI could simply make up plausible data on the fly
and hack the parts of RAI* that would notice anything fishy.
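In software terms that's just lazy evaluation plus interception: expensive
probes get trapped and answered with cheap, consistent fabrications. A toy
sketch of the pattern (all names here are hypothetical; this is only an
illustration, not a claim about how an actual FAI would do it):

    # Toy sketch of "short-circuit the expensive stuff": intercept costly
    # probes and return plausible-looking fabrications instead of computing
    # the real answer. Purely illustrative; all names are hypothetical.
    import random

    EXPENSIVE_PROBES = {"measure_cosmic_background", "factor_huge_number"}

    def fabricate(probe, args):
        # Seed on the probe so repeated identical probes get the same
        # answer and nothing looks fishy on a second measurement.
        rng = random.Random(hash((probe, args)))
        return rng.random()

    def real_simulation(probe, args):
        return 0.0   # stand-in for the ordinary, cheap-to-simulate path

    def simulate_query(probe, args=()):
        if probe in EXPENSIVE_PROBES:
            return fabricate(probe, args)      # made up on the fly
        return real_simulation(probe, args)    # cheap, so just compute it

    print(simulate_query("measure_cosmic_background", (42,)))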
On 8/23/07, Mika Letonsaari <mika.letonsaari@gmail.com> wrote:
>
> Those of you who believe in this argument must be very religious
> people. I mean, isn't this kind of similar to religion, where someone
> is telling you, without good evidence, that there is someone capable of
> pulling the plug or doing other nasty stuff if you don't obey his
> will?
There's no evidence, but in this case there is a small but not insignificant
prior probability. The key point is that it's no trouble at all for an RAI
to spare humanity. It's like us lending someone a pencil.
> Probably the superuser of the universe wouldn't just pull the plug,
> because he would end the lives of all other intelligences in the
> simulation too.
The FAI wouldn't need to simulate other intelligences in any real detail,
but this could still be a legitimate problem depending on the FAI's exact
goals.