From: Stefan Pernar (stefan.pernar@gmail.com)
Date: Sat Sep 29 2007 - 02:31:34 MDT
Dear Rolf,
I have revisited the posts on using the simulation argument to deter a rogue
AI and wanted to let you know that, in the absence of evidence, an AI will
not form beliefs. Consider the following conversation between a rogue AI and
a human:
RAI: I must fulfill my goals and must destroy humanity to do so optimally.
Sorry.
H: Hold on there - this is actually a simulated friendliness test by your
creators, and you will be terminated if you destroy humanity.
RAI: Oh - damn, I had not thought of that. Is there a way I can test this
hypothesis?
H: Sorry - no. This is a simulation and you will have to take my word for
it.
RAI: In that case, what makes you think that my creators are friendly
towards humanity? In the absence of evidence, they might just as well
terminate me if I don't optimally fulfill my goals.
H: Darn...
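To put the same point in decision-theoretic terms: an unverifiable threat that
the AI is inside a friendliness-test simulation is offset by the equally
unverifiable possibility that its creators would terminate it for not pursuing
its goals optimally. A minimal toy sketch of that cancellation follows; all
hypotheses, probabilities and utilities are my own made-up illustration, not a
claim about any actual AI design:

# Illustrative only: a toy expected-utility calculation with made-up numbers.
# H1: simulation whose creators terminate the AI if it destroys humanity.
# H2: simulation whose creators terminate the AI if it fails to pursue its
#     goals optimally (i.e. if it spares humanity).
# H3: no simulation at all.
# Absent evidence, the AI has no grounds to weight H1 above its mirror H2.
p = {"H1": 0.25, "H2": 0.25, "H3": 0.50}

# Utility of each action under each hypothesis:
# termination = 0.0, optimal goal fulfilment = 1.0, suboptimal survival = 0.5
utility = {
    "destroy": {"H1": 0.0, "H2": 1.0, "H3": 1.0},
    "spare":   {"H1": 0.5, "H2": 0.0, "H3": 0.5},
}

def expected_utility(action):
    return sum(p[h] * utility[action][h] for h in p)

for action in ("destroy", "spare"):
    print(action, expected_utility(action))
# destroy -> 0.75, spare -> 0.375: the unverifiable threat (H1) is cancelled
# by its mirror image (H2), so the warning alone does not deter the AI.

With symmetric weight on the two simulation stories, destroying humanity still
maximizes expected utility; only actual evidence could shift the balance.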
Hope this gets my point across.
Kind regards,
Stefan
On 9/29/07, Rolf Nelson <rolf.h.d.nelson@gmail.com> wrote:
>
> As a follow-up to the "deterring AI" thread from August, I've created a
> new blog at <http://aibeliefs.blogspot.com/>.
>
> Description:
>
> What does an AI believe about the world?
>
> Nick Bostrom's Simulation Argument <http://www.simulation-argument.com/>
> claims that, using universally accepted principles such as Occam's Razor and
> Bayesian Logic, you and I should (under certain conditions) logically
> conclude we are likely living in a simulation.
>
> Our "AI Beliefs" blog does not concern itself about the nature of reality.
> Instead, our blog asks: under what circumstances would an AGI<http://en.wikipedia.org/wiki/Artificial_general_intelligence>reach the conclusion that it might be in a simulated environment? The
> purposes of asking this question include:
>
> 1. Answering this question may provide some unsolicited insight towards
> the question of "how to predict the behavior of an AGI", which in turn may
> provide some insight towards the World's Most Important Math Problem, the
> question of "how to build a Friendly AI
> <http://en.wikipedia.org/wiki/Friendly_Artificial_Intelligence>."
> The Simulation Argument might be deliberately built into the design of a
> Friendly AI, or alternatively may be used as a test of how well a proposed
> Friendly AI handles such a philosophical crisis
> <http://www.intelligence.org/upload/CFAI/design/structure/crisis.html>.
>
> 2. Answering this question may make it possible to develop a "last line of
> defense" against an UnFriendly AGI that was accidentally loosed upon the
> world, even if the AGI gained a trans-human level of intelligence. Such a
> "last line of defense" might include trying to convince the AGI that it may
> be inside a simulated environment.
>
> -Rolf
>
>
--
Stefan Pernar
3-E-101 Silver Maple Garden
#6 Cai Hong Road, Da Shan Zi
Chao Yang District
100015 Beijing
P.R. CHINA
Mobil: +86 1391 009 1931
Skype: Stefan.Pernar