Re: What is stability in a FAI? (was Re: UCaRtMaAI paper)

From: Stefan Pernar (stefan.pernar@gmail.com)
Date: Sat Nov 24 2007 - 18:13:12 MST


On Nov 25, 2007 6:46 AM, Tim Freeman <tim@fungible.com> wrote:

> From: "Wei Dai" <weidai@weidai.com>
> >To take the simplest example, suppose I get a group of friends together and
> >we all tell the AI, "at the end of this planning period please replace
> >yourself with an AI that serves only us." The rest of humanity does not know
> >about this, so they don't do anything that would let the AI infer that they
> >would assign this outcome a low utility.
>
> Good example. It points to the main flaw in the scheme -- I can't
> prove it's stable, and a solution to the Friendly AI problem has to be
> stable. Here "stability" roughly means that our Friendly AI isn't
> going to construct an unfriendly AI and then allow the new one to take
> over. However, if I look more closely, I don't know what "stable"
> means.
>

For an AI to be friendly, it would have to want to be friendly. The question of
whether friendly AI is possible is therefore equivalent to asking whether one
can rationally want to be friendly.

I wrote a paper that proves that friendliness is an emergent phenomenon
among interacting goal-driven agents under evolutionary conditions.
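
To make that kind of claim concrete (this is a generic toy illustration of the
evolution-of-cooperation idea, not the model in the paper): in a repeated
prisoner's dilemma with replicator-style selection, a retaliating cooperator
can out-reproduce unconditional defectors once a small seed of cooperators
exists. A minimal Python sketch, with payoff values chosen only for
illustration:

# Toy replicator-dynamics sketch (NOT the model in the paper):
# two strategies, tit-for-tat ("friendly") and always-defect, play a
# repeated prisoner's dilemma; each strategy's population share grows in
# proportion to its average payoff against the current mix.

ROUNDS = 20              # interactions per pairing
T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker payoffs

def pair_payoff(a, b):
    """Total payoff to strategy a when paired with strategy b over ROUNDS."""
    if a == "tft" and b == "tft":
        return R * ROUNDS            # mutual cooperation every round
    if a == "alld" and b == "alld":
        return P * ROUNDS            # mutual defection every round
    if a == "tft" and b == "alld":
        return S + P * (ROUNDS - 1)  # exploited once, then retaliates
    return T + P * (ROUNDS - 1)      # alld exploits tft once, then is punished

def step(x_tft):
    """One replicator step: share of tit-for-tat in the next generation."""
    x_alld = 1.0 - x_tft
    f_tft = x_tft * pair_payoff("tft", "tft") + x_alld * pair_payoff("tft", "alld")
    f_alld = x_tft * pair_payoff("alld", "tft") + x_alld * pair_payoff("alld", "alld")
    avg = x_tft * f_tft + x_alld * f_alld
    return x_tft * f_tft / avg

x = 0.2  # start with only 20% "friendly" agents
for gen in range(30):
    x = step(x)
print("share of friendly agents after 30 generations: %.3f" % x)

With these payoffs the cooperative strategy takes over from any starting share
above roughly 3%; below that threshold defection wins, which is why the result
is an evolutionary tendency rather than a guarantee in every population.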

The paper is available as a PDF: Practical Benevolence - a Rational
Philosophy of Morality
<http://rationalmorality.info/wp-content/uploads/2007/11/practical-benevolence-2007-11-17_isotemp.pdf>

Kind regards,

Stefan

-- 
Stefan Pernar
3-E-101 Silver Maple Garden
#6 Cai Hong Road, Da Shan Zi
Chao Yang District
100015 Beijing
P.R. CHINA
Mobile: +86 1391 009 1931
Skype: Stefan.Pernar

