Can't afford to rescue cows (was Re: Arbitrarily decide who benefits)

From: Peter C. McCluskey (pcm@rahul.net)
Date: Fri Apr 18 2008 - 13:35:46 MDT


 tim@fungible.com (Tim Freeman) writes:
>But never mind that, I'm too conflict-averse to make the attempt. The
>Buddhists say cows (and other mammals) are conscious. If humans eat
>cows, and my AI is influenced more by empathy for sentient beings than
>by respect for cow butcherers, it will try to stop the cows from being
>eaten. But the problem is that the humans have guns and will start
>shooting at the AI (or its implementor) if it stops them from killing
>and eating cows. In contrast, the cows do not have guns. So trying
>to save the cows would make the AI (and its implementor) targets for
>no political benefit.

 Conflict aversion seems like a poor reason for excluding cows from your
AI's utility function.
 I'm rather apathetic about whether cows are included in your utility
function, but CEV comes closer than your approach to capturing what I
want an AI to do.
 A conflict-aversion strategy may also be exploitable, in that it
encourages people to threaten conflict in order to influence your choice
of a utility function (in this case, it encourages PETA types to convince
you they're more willing and able to endanger your AI than they would be
if you had committed to ignoring such threats).
 Any well-designed AI ought to be able to derive an appropriate level of
conflict aversion from a utility function that contains no explicit
conflict-aversion term, as the sketch below illustrates.
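 Here's a rough Python sketch of that derivation. The numbers and the
two-action setup are made up for the example; none of this is taken from
your design or anyone's actual AI.

COWS_SAVED_VALUE = 1.0     # utility of the cows saved by intervening (assumed)
SURVIVAL_VALUE = 1000.0    # utility of the AI remaining operational (assumed)
P_SHOT_IF_INTERVENE = 0.5  # chance armed humans destroy the AI if it intervenes (assumed)

def expected_utility(action):
    # The utility function values cow welfare and continued operation.
    # It contains no "avoid conflict" term.
    if action == "intervene":
        return COWS_SAVED_VALUE + (1 - P_SHOT_IF_INTERVENE) * SURVIVAL_VALUE
    return SURVIVAL_VALUE  # "abstain": no cows saved, no risk of being shot

print(max(["intervene", "abstain"], key=expected_utility))
# -> "abstain": the maximizer behaves conflict-aversely because conflict
#    risks losing most of its expected value, not because aversion was
#    written into the utility function.

 If the cows mattered enough relative to the risk, the same calculation
would favor intervening, which is the sense in which the derived aversion
is appropriate rather than absolute.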

 tim@fungible.com (Tim Freeman) writes:
>No, I didn't ask that and am not interested in that. We want what we
>want. So far as I can tell, talk about what we should want just leads
>to hypocrisy. People say they should want things that sound nice but
>are entirely unconnected with what they do want, and there's no way to
>form a connection, so it's just hot air.

 I don't see what you claim to see. I see a lot of hypocrisy related to
wants, but I don't see hypocrisy that is better explained by problems
associated with "wanting to want" than it is by conflicts associated with
what we currently want.
 What I want seems to include a desire to alter some of my current wants
(e.g. I want to eliminate any logical inconsistencies that may exist in
my current set of wants), so you appear to be excluding some of my wants
from your AI's utility function.

-- 
------------------------------------------------------------------------------
Peter McCluskey         | When someone is honestly 55% right, that's very good
www.bayesianinvestor.com| Whoever says he's 100% right is a fanatic

