From: Tim Freeman (tim@fungible.com)
Date: Sat Oct 24 2009 - 10:37:25 MDT
From: Matt Mahoney <matmahoney@yahoo.com>
>For another proposed definition of Friendliness, see
>http://intelligence.org/upload/CEV.html
I get parts of CEV. "Volition" is obvious -- the AI has to understand
what people actually want -- and the "Coherent" part is similar enough
to the averaging in my algorithm that I don't care about the distinction.
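For concreteness, here's a toy sketch of the kind of averaging I mean
(the names and structure are just illustrative, not my actual
algorithm): each person scores each candidate action, and the AI takes
the action with the best average score.

    def choose_action(candidate_actions, utility_functions):
        # utility_functions: one function per person, mapping an action
        # to how much that person values it.
        def average_utility(action):
            return sum(u(action) for u in utility_functions) / len(utility_functions)
        return max(candidate_actions, key=average_utility)

    # Example: two people who disagree about one of the candidate actions.
    people = [lambda a: {"build_park": 1.0, "build_road": 0.0}[a],
              lambda a: {"build_park": 0.2, "build_road": 0.6}[a]]
    print(choose_action(["build_park", "build_road"], people))  # -> build_park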
I also get part of the "Extrapolation" step. If someone wants X
because it will let them achieve Y, but they have false beliefs and X
won't really let them achieve Y, then you don't want to give them X
just because they want it. The AI should give people what they would
want if they had true beliefs, not what they actually want. That's
one part of Eliezer's "Extrapolated" concept.
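In other words, something like this toy sketch (names are hypothetical;
it's the idea, not anyone's real proposal): the person supplies how much
they value each outcome, and the AI supplies its own, presumably truer,
model of which outcome an action really produces.

    def corrected_utility(action, values_outcome, true_outcome_of):
        # values_outcome: the person's valuation of outcomes (their Y).
        # true_outcome_of: the AI's model of what the action really causes,
        # used in place of the person's possibly-false belief about it.
        return values_outcome(true_outcome_of(action))

    # Example: the person believes the snake-oil cure makes them healthy,
    # but the AI's model says it doesn't, so the corrected utility is low.
    values = lambda outcome: 1.0 if outcome == "healthy" else 0.0
    true_model = lambda action: "still_sick" if action == "take_snake_oil" else "healthy"
    print(corrected_utility("take_snake_oil", values, true_model))  # -> 0.0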
But there's more to "Extrapolated" than that. Quoting from
http://intelligence.org/upload/CEV.html as read on 24 Oct 2009:
In poetic terms, our coherent extrapolated volition is our wish if we
knew more, thought faster, were more the people we wished we were, had
grown up farther together; where the extrapolation converges rather
than diverges, where our wishes cohere rather than interfere;
extrapolated as we wish that extrapolated, interpreted as we wish that
interpreted.
"Knew more" and "thought faster" are close enough to "if they had true
beliefs" that I don't care about the difference.
But the other counterfactual things don't seem desirable:
"were more the people we wished we were". This brings to mind people
with repressed sexuality who want sex but think sex is bad so they
don't want to want sex. This is based on a false belief -- sex isn't
bad in general. But this person really wishes they didn't want sex.
"had grown up farther together": there are toxic people who, if I had
grown up farther with them, I'd be completely useless. This became
utterly clear to me as a consequence of my first marriage. This part
of Extrapolation is just bad.
In general, I want what I want, and except when the AI knows I'm
mistaken about facts, I want the AI to give me what I want. That's the
"Volition" part. There are other people, so there has to be some
compromise between what everyone wants if the AI is to do one thing;
that's the "Coherent" part. Other than compensating for mistaken
beliefs, I don't see any use for the "Extrapolation" part. I don't want
the AI catering to hypothetical ideal people; I want the AI to give
real people what they want.
Can anyone make a decent case for these dubious parts of Extrapolation?
-- Tim Freeman http://www.fungible.com tim@fungible.com