Re: Universalising an AGI's duty of care

From: H C (lphege@hotmail.com)
Date: Mon Jul 18 2005 - 22:48:47 MDT


Ender's Game series, what a great source of inspiration (.. ahem ^_^)

Given FAI, I think humans would have the means to defend themselves against
aliens (of lesser intelligence than the FAI!); however, there would be no
reason to ATTACK the aliens unless the only means of defense was attack.

Thus…

Essentially, the way I see it, our first primary problem is FAI (due
primarily to UFAI threats, though beyond that AI seems to be the most
optimal way to go about things), whereas our next problem is the "big
freeze" (which is the ultimate threat to the immortalist way of life). An
intermediate problem between those two would be the threat of “Other”
Unfriendly Intelligence, aka Evil Aliens. Even if we have an all-powerfully
smart FAI, there is still the threat of an even more all-powerfully smart
Evil Alien Intelligence.

>From: Joel Pitt <joel.pitt@gmail.com>
>Reply-To: sl4@sl4.org
>To: sl4@sl4.org
>Subject: Re: Universalising an AGI's duty of care
>Date: Tue, 19 Jul 2005 13:45:19 +1200
>
>Philip,
>
>I think that these golden rules are not something I want to have alien
>species use in judging how they treat me. For instance, a species may
>believe in sacrificing life and hold it as the highest honour; however,
>unless I believe in their spirituality I don't want an FAI deeming it okay
>because it fits these golden rules (presumably any of the aliens would
>volunteer and *want* to be the sacrificial individual). Those who have read
>the Ender saga by Orson Scott Card may recognise this scenario.
>
>When other sentient beings' actions *intentionally* put the primary
>interests (i.e. existence) of other individuals at risk, they should
>be prevented from doing so.
>
>-Joel
>
>
>Philip Sutton wrote:
>>I don't think it's a coincidence that golden rules are common in the most
>>widespread human philosophies, e.g. "do unto others as you would have them
>>do unto you" (golden rule 1) or "don't do unto others what they would have
>>you not do unto them" (golden rule 2). Things seem to work reasonably
>>well when diverse cultures have contacts that are governed by these sorts
>>of rules. I imagine it wouldn't be impossible to implement these ideas in
>>any galaxy where sentients clever enough to create AGIs exist.
>>Golden rule 1 can guide action even if you know nothing about the other
>>sentients you are contacting (which means it is not a fail-safe rule).
>>Golden rule 2 means you need to build understanding of any sentients you
>>contact before taking action that could potentially violate the
>>rule. Golden rule 2 would require an attitude of forbearance, patience,
>>and careful learning.
>>
>>By the way, the golden rules were invented to inject a bit of
>>friendliness into natural general intelligences where contact between
>>non-kin or out-groups occurred, especially in larger rural and early
>>urban communities.
>>
>>My guess is that AGIs on Earth that adhered to both these rules would have
>>at least a basic level of friendliness.
>>
>>Cheers, Philip
>>
>>
>


