From: Matt Mahoney (matmahoney@yahoo.com)
Date: Mon Apr 21 2008 - 15:37:59 MDT
--- Vladimir Nesov <robotact@gmail.com> wrote:
> Matt,
>
> You talk about these utilities (with underspecified meaning) as if
> people actually choose their decisions based on them, as if they hold
> causal powers. But in fact, it's the opposite: utilities are a way to
> roughly model human behavior. At best this formalism can be considered
> as a way to describe an ideal utilitarian AI. People are not
> fitness-maximizers or utility-maximizers, they are a hack of
> adaptation-executing. They learn when to be happy, and when suicidal,
> depending on context (not that it's easy to control such learning).
> You use "U(x)" thingie like "phlogiston" to create an illusion of
> justified argument.
Yes, I know a utility function is just a model. These tests of happiness only
measure how people answer the question "are you happy?" It is like someone who
gets in a car accident, is severely injured, and then says "I was lucky, I
could have been killed." Was he really lucky?
When people say they are happy, they are comparing their situation to some
imagined alternative. So what are you really measuring?
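To make the comparison point concrete, here is a toy sketch (my own
illustration, nothing more; the numbers and the adaptation rule are made up):
if the imagined baseline adapts to whatever the current situation is, the
reported "happiness" stays flat no matter how much the situation improves.

def happiness_report(actual, imagined_baseline):
    # A report of "I am happy" compares the situation to an imagined one.
    return actual - imagined_baseline

baseline = 0.0
for actual in [1.0, 2.0, 3.0, 4.0, 5.0]:    # circumstances steadily improve
    print(happiness_report(actual, baseline))  # prints 1.0 every time
    baseline = actual                          # the comparison point adapts

The circumstances improve without bound, but the answer to "are you happy?"
never changes, because you are measuring the gap, not the circumstances.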
But my point is that if you expect AGI to make boundless happiness possible, I
think you will be disappointed. How would you program a brain, or any
intelligent system, to experience accumulated happiness that grows without
bound? I don't think you could do it in humans with drugs, wireheading,
simulation, or neural reprogramming. If you think it is possible for any
intelligent system, then define the system, define happiness, and show me the
code.
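To show what I mean, the obvious thing someone might hand me is a counter
(this is my own strawman; the names are purely illustrative): an agent whose
"happiness" variable just accumulates reward forever. It is unbounded, but
nothing about it says why that number should be experienced as happiness, and
that is exactly the part I am asking you to define.

class NaiveAgent:
    def __init__(self):
        self.happiness = 0.0        # an unbounded accumulator, nothing more

    def step(self, reward):
        self.happiness += reward    # grows without bound if rewards keep coming
        return self.happiness

agent = NaiveAgent()
for t in range(5):
    agent.step(1.0)
print(agent.happiness)              # 5.0 -- why would this number be felt as anything?

Until you can say what makes such a variable happiness rather than just a
number, I don't think the challenge has been met.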
-- Matt Mahoney, matmahoney@yahoo.com