From: Joshua Fox (joshua@joshuafox.com)
Date: Tue Dec 11 2007 - 03:15:16 MST
Tim,
Thanks for those answers.
Here are my thoughts. I hope they help.
> Having respect equal to compassion (and therefore not having to
> distinguish between them) is the alternative Joshua is talking about. An AI
> with these settings would tend to do "Robin Hood" type behavior, taking from
> one person to give to someone else who needs the resources a little bit
> more. These involuntary transfers could be money, internal organs, or
> anything else of value. Well-informed people who value having higher status
> than their neighbors, and who are winning that game at the moment, would
> want to get rid of the AI.
Humans sometimes have a bias towards inaction ("leave well enough alone,"
"the devil you know," the precautionary principle, the idea that a sin of
commission is worse than a sin of omission), partly because of worries
about human intentions -- we don't trust people. Also, from any given
state, the space of changes that make things worse is larger than the
space of changes that make things better, so people would rather hang on
to the known quantities they have than risk the unknowns.
As you say, to the extent that the AI must interact with humans, such
caution makes sense for the AI. But beyond that, isn't it a bias, and
therefore less than rational?
> Exponential discounting fixes the odd behaviors you list, but it adds
> others. If the AI discounts its utility at 10% per year, and the economy
> measured in dollars is growing at 20% per year, and the dollar cost of
> utility is constant, then the AI will defer all gratification
> until circumstances change. The people who the AI is nominally serving
> might not like that.
But the AI has respect/compassion, and so would take the humans' own
discount rate into account -- in this case, that would mean deferring
gratification much less.
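To make the deferral point concrete, here is a minimal sketch in Python
(the 10% and 20% rates are from your example; the 25% human rate and the
present_value name are hypothetical, just for illustration):

    # Hypothetical sketch of the deferral argument.
    # A budget grows with the economy at 20%/year; spending it at year t
    # buys utility proportional to the budget, discounted back to the
    # present at the AI's 10%/year rate.
    def present_value(t, growth=0.20, discount=0.10):
        return ((1 + growth) / (1 + discount)) ** t

    for t in (0, 1, 10, 50):
        print(t, round(present_value(t), 2))
    # 0 -> 1.0, 1 -> 1.09, 10 -> 2.39, 50 -> 77.52: the value of
    # waiting keeps rising, so the AI defers gratification forever.

    # If the AI instead applies a steeper human discount rate, say
    # 25%/year > 20% growth, waiting loses value and it acts now.
    for t in (0, 1, 10):
        print(t, round(present_value(t, discount=0.25), 2))
    # 0 -> 1.0, 1 -> 0.96, 10 -> 0.66: acting sooner is better.

The second loop is the point above: once the AI weighs outcomes by the
humans' own, steeper discount rate, the "defer everything" behavior
disappears.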
Joshua