Maximizing vs proving friendliness

From: Stefan Pernar (stefan.pernar@gmail.com)
Date: Mon Apr 28 2008 - 19:13:39 MDT


On Tue, Apr 29, 2008 at 7:57 AM, Thomas McCabe <pphysics141@gmail.com>
wrote:

> On Mon, Apr 28, 2008 at 7:37 PM, Stefan Pernar <stefan.pernar@gmail.com>
> wrote:
> > On Tue, Apr 29, 2008 at 7:10 AM, Thomas McCabe <pphysics141@gmail.com>
> > wrote:
> >
> >
> > Interesting thought experiment. However, perfect friendliness under all
> > circumstances really is not the goal aimed for, as it is an unrealistic
> > and unobtainable ideal.
> >
>
> By "perfect friendliness", I mean that the FAI should always make the
> Friendly decision, not that the outcome should always be Friendly
> (which is impossible; see the debate at
> http://www.vetta.org/2006/09/friendly-ai-is-bunk/).
>

The philosophical point that friendliness is inherently limited is well
taken. For practical purposes, however, I think it is important to aim for
'as friendly as possible'.

Assumptions:
Intelligence is defined as an agent's ability to maximize a given utility
function.
Friendliness can be expressed as an agent's utility function.

Conclusion:
An agent whose utility function is to be friendly will become friendlier
the more intelligent it becomes.
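
To make the inference explicit in symbols (my own notation, not part of the
argument above): let F be a friendliness measure over outcomes o, let pi_A
denote agent A's policy, and define intelligence relative to a utility
function U as the expected utility the agent attains,

    I_U(A) = E_{o ~ pi_A}[ U(o) ].

If the agent's utility function is friendliness itself (U = F), then for two
agents A1 and A2 sharing that utility,

    I_F(A1) > I_F(A2)  implies  E_{o ~ pi_A1}[ F(o) ] > E_{o ~ pi_A2}[ F(o) ],

i.e. under these definitions the more intelligent agent is, in expectation,
the friendlier one.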

-- 
Stefan Pernar
3-E-101 Silver Maple Garden
#6 Cai Hong Road, Da Shan Zi
Chao Yang District
100015 Beijing
P.R. CHINA
Mobile: +86 1391 009 1931
Skype: Stefan.Pernar

