From: Mark Waser (mwaser@cox.net)
Date: Wed Mar 19 2008 - 19:04:19 MDT
>> On the other list, I argued that the Friendliest response possible to a
>> win-win ruse was to
>> a) repeatedly declare that anyone caught running such a ruse HAD TO and
>> WOULD be punished by (i) having restitution extracted to the exact extent
>> of the expected utility of their ruse AND (ii) being made ineligible to
>> receive similar restitution from others, for an equal amount of utility,
>> by being forced to pay such restitution into a common pool that covers
>> the expenses of those extracting it
>> AND
>> b) *always* follow through on the declaration
>>
>> That's the best/only way that I can currently come up with to make such
>> a ruse unpalatable to an intelligent self-interested unFriendly.
>
> That - like all reciprocal checks on unFriendliness - relies on _being
> able to punish_, which requires that all entities are more or less
> similarly powerful, and no entity can kill a really large number of
> others before being stopped. It works well enough between humans, not
> at all between humans and superintelligences.
Actually, I also handled that on the other list at
http://www.listbox.com/member/archive/303/2008/03/sort/time_rev/page/3/entry/16:298
The scenario shows how a Friendly little planet could conceivably protect
itself against a much larger alien invasion. Personally, I think that it
could be turned into an awesome science fiction story.
It does not work, however, if the smaller party is too well known by the
larger one, so the smaller *really* needs to infect the larger with the
Friendly meme before the larger learns TOO much.
Fortunately, I'm quite sure that ethics/Friendliness is an Omohundro drive.
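The quoted deterrence scheme has a simple expected-utility reading: if being caught costs the defector twice the ruse's expected utility (restitution plus the equal amount of forfeited eligibility), the ruse becomes unprofitable whenever detection is more likely than not. A minimal sketch, with purely hypothetical numbers (the function name and the figures are mine, not from the post):

```python
def net_expected_utility(ruse_utility, p_caught):
    """Defector's net expected utility under the quoted scheme:
    if caught, pay restitution equal to the ruse's expected utility
    AND forfeit an equal amount of eligibility to the common pool."""
    penalty = 2 * ruse_utility  # restitution + forfeited eligibility
    return ruse_utility - p_caught * penalty

# The ruse stops paying once the chance of being caught exceeds 50%:
print(net_expected_utility(100, 0.4))  # 100 - 0.4*200 = 20.0  (still profitable)
print(net_expected_utility(100, 0.6))  # 100 - 0.6*200 = -20.0 (deterred)
```

Note that part (b) of the scheme, *always* following through, is what lets a rational defector treat `p_caught` as the only free variable in this calculation.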
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT