RE: Volitional Morality and Action Judgement

From: Keith Henson (hkhenson@rogers.com)
Date: Tue May 18 2004 - 07:17:58 MDT


At 10:17 PM 17/05/04 -0700, you wrote:
> >>Sure, sometimes primitive mental programming gets switched on and all
> >>hell breaks loose (competition for mates, war, etc.) but most of the
> >>time people trade, negotiate, make deals, treaties and agreements -
> >>they sometimes even come up with win-win solutions (gasp!). If mere
> >>humans can solve this problem, and on occasion solve it well, then an
> >>FAI should not find it too difficult to facilitate.
>
> >The problem is not negotiating between competing entities, but deciding
> >which one's viewpoint you want to adopt.
>
>Why? When there's conflict between multiple parties, there's a whole
>range of possible solutions. One is, as you state, selecting one
>viewpoint and ignoring the others. But there's generally a whole range
>of in-between consensus solutions that satisfy all parties "enough".
>The more intelligent and omniscient the AI, the more ve'll be able to
>help the conflicting parties to find an agreeable solution. Just
>because one of the parties can't see a compromise solution doesn't
>mean such a solution doesn't exist.

Maybe. On the other hand, a Friendly AI tasked with deciding whether your
genes or your memes should have control of your body might wind up an
annoyed, unfriendly AI and decide that the kindest thing to do would be to
kill you. :-)

Keith Henson


