RE: Volitional Morality and Action Judgement

From: Mike (mikew12345@cox.net)
Date: Mon May 17 2004 - 23:17:19 MDT


>>Sure, sometimes primitive mental programming gets switched on and all
>>hell breaks loose (competition for mates, war, etc.) but most of the
>>time people trade, negotiate, make deals, treaties and agreements -
>>they sometimes even come up with win-win solutions (gasp!). If mere
>>humans can solve this problem, and on occasion solve it well, then an
>>FAI should not find it too difficult to facilitate.

>The problem is not negotiating between competing entities, but deciding
>which one's viewpoint you want to adopt.

Why? When there's conflict between multiple parties, there's a whole
range of possible solutions. One is, as you state, selecting one
viewpoint and ignoring the others. But there's generally a range of
in-between consensus solutions that satisfy all parties "enough". The
more intelligent and better-informed the AI, the better ve'll be able
to help the conflicting parties find an agreeable solution. Just
because one of the parties can't see a compromise solution doesn't
mean such a solution doesn't exist.
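
To make the point concrete, here's a minimal sketch (in Python, with
made-up parties, options, and utility numbers) of the kind of search an
FAI could do: enumerate candidate outcomes and keep the ones that clear
every party's "good enough" threshold, even if neither party proposed
them.

# Toy example: each party rates every candidate outcome, and an outcome
# counts as an acceptable compromise if it clears every party's
# "good enough" threshold. Parties, options, and scores are invented
# purely for illustration.

utilities = {
    # option: {party: utility from that party's point of view}
    "A gets everything":   {"A": 10, "B": 0},
    "B gets everything":   {"A": 0,  "B": 10},
    "split the resource":  {"A": 6,  "B": 6},
    "trade side payments": {"A": 7,  "B": 5},
}

thresholds = {"A": 5, "B": 5}  # what each party considers "enough"

def acceptable_compromises(utilities, thresholds):
    """Return every option that satisfies all parties' minimum thresholds."""
    return [
        option
        for option, scores in utilities.items()
        if all(scores[party] >= minimum for party, minimum in thresholds.items())
    ]

print(acceptable_compromises(utilities, thresholds))
# ['split the resource', 'trade side payments'] -- neither party's
# preferred extreme, but both clear everyone's bar.

Neither compromise is what either party would have picked on its own,
which is the point: the acceptable region can be non-empty even when no
single participant has spotted it.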


