Re: ESSAY: How to deter a rogue AI by using your first-mover advantage

From: Norman Noman (overturnedchair@gmail.com)
Date: Thu Aug 23 2007 - 10:24:07 MDT


On 8/23/07, Stathis Papaioannou <stathisp@gmail.com> wrote:

> Yes: if it's a perfectly transparent (perfectly opaque?) simulation,
> there is no way of testing any hypothesis you may come up with about
> the simulators. Maybe the simulators are honest about the threat of
> retribution, maybe they're just pretending, or maybe there are no
> simulators.

Just because you can't test hypotheses doesn't mean you have no information.
The prior probabilities are different. In the plan outlined in this
conversation, running the simulation and not including retribution would be
totally pointless.
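
To make that concrete, here is a toy sketch in Python. Every number in
it is my own invention; the point is only that "untestable" does not
force "equally likely":

    # Toy priors over what is outside the simulation. All numbers are
    # invented; the point is that untestable hypotheses need not get
    # equal weight.
    priors = {
        "no_simulators": 0.90,
        "simulators_who_will_punish": 0.09,
        "simulators_who_are_bluffing": 0.01,  # bluffing makes running the
    }                                         # sim pointless: least weight
    assert abs(sum(priors.values()) - 1.0) < 1e-9

    # Even with no way to test any of these from the inside, an AI
    # weighing its actions uses these unequal numbers, not a shrug.
    print(priors["simulators_who_will_punish"])  # 0.09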

> Each of these possibilities and every other possibility is
> perfectly compatible with all available evidence, and there is no way
> (by definition) of obtaining further evidence to help you decide which
> is more likely. It would be like trying to guess God's shoe size in
> the absence of any divine revelation.

My guess at God's shoe size is that God doesn't exist. The existence of
God, and thus of his feet, is NOT perfectly compatible with all
available evidence. Knowing what is contained within a hermetic
simulation, the beings inside it can legitimately assign different
probabilities to different theories about what is outside.

On 8/23/07, Mika Letonsaari <mika.letonsaari@gmail.com> wrote:

> When I've asked my religious friends, they usually say they can't be
> sure whether the Bible, for example, is true or not. They say there
> may be a small possibility, but it's better to be safe than sorry
> (and being religious doesn't carry much cost in some social groups;
> it can even be a benefit locally). So they assign a low probability
> too.

The problem with Pascal's wager is that every possible god, with every
possible system of judgement, is equally likely to exist. The same is
not true of AIs: rather than springing from nothing, they are designed
by people, with evolved, and thus predictable, values.
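
As a sketch of that asymmetry, again in Python and again with numbers
I have made up:

    # Pascal's wager: a uniform prior over every conceivable god. For
    # every god who rewards some act there is another who punishes it,
    # so the expected payoffs cancel and the wager gives no guidance.
    n_conceivable_gods = 10**9
    p_any_particular_god = 1.0 / n_conceivable_gods  # tiny and symmetric

    # An AI's simulators are different: they are (or descend from)
    # evolved designers, so the prior over their values is concentrated
    # rather than uniform.
    p_simulators_value_human_survival = 0.9  # invented, but lopsided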

> And why should we be interested in an AI which gives so small a
> probability that it doesn't act accordingly?

The probability really wouldn't have to be very big for this to work,
because saving humanity would require only a tiny, tiny investment
from the AI. The Earth makes up only about one part in 10^25 of the
raw material in the observable universe.
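
Spelled out as back-of-the-envelope arithmetic (Python once more; every
number except the Earth fraction is a placeholder):

    # Deterrence arithmetic from the AI's point of view: it gains a
    # sliver of the universe's resources by consuming Earth, but risks
    # a penalty if it is inside a monitored simulation.
    earth_fraction = 1e-25  # Earth's rough share of the observable universe
    p_simulation   = 1e-6   # placeholder credence of being simulated
    penalty        = 1.0    # fraction of utility forfeited if punished

    expected_cost_of_defection = p_simulation * penalty  # 1e-06
    gain_from_defection        = earth_fraction          # 1e-25

    # A one-in-a-million credence dwarfs the gain by nineteen orders of
    # magnitude, so sparing Earth is the better bet.
    print(expected_cost_of_defection > gain_from_defection)  # True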

> We are interested in making AI behave so that it doesn't eat us,
> which means creating a bogus myth about there being someone who can
> punish wrongdoers.

It's not a bogus myth; it can actually be done.


