RE: Universalising an AGI's duty of care

From: H C (lphege@hotmail.com)
Date: Sun Jul 17 2005 - 20:21:40 MDT


[speculation]

I imagine that any real Friendly RPOP (really powerful optimization process,
aka FAI) would probably be concerned with, and interested in, the affairs of
other sentient beings. Even if the FAI weren't intrinsically, humans probably
would be, and thus the FAI would be as well.

[/speculation]

Unless, of course, we are talking about aliens whose primary goals are
inherently opposed to ours, in such a way that it is impossible to coexist
with them without the destruction or alteration of the fundamental desires
of one race or the other.

So, ultimately, I disagree with you.

>From: "Philip Sutton" <Philip.Sutton@green-innovations.asn.au>
>Reply-To: sl4@sl4.org
>To: sl4@sl4.org
>Subject: Universalising an AGI's duty of care
>Date: Mon, 18 Jul 2005 11:00:20 +1000
>
>A lot of people are still framing their discussion of friendliness goals
>around prime consideration of humans.
>
>How about adopting another frame...
>
>I reckon we should start from the perspective of a person who is advising
>the makers of AGIs in another galaxy. What friendliness goals would we
>recommend that they adopt?
>
>Taking this perspective enables us to more easily and automatically think
>in a 'universal' way rather than being caught in the particularities of the
>earth and humans. If AGIs are capable of developing awesome powers to shape
>events in many parts of the universe, then we should be concerned about the
>friendliness rules of AGIs originating in other parts of the universe. The
>rules that we would wish these distant entities to incorporate in their
>AGIs might well cover much of the ground that we should incorporate in
>AGIs that we develop here.
>
>Cheers, Philip
>
