From: Tim Freeman (firstname.lastname@example.org)
Date: Sat Apr 26 2008 - 12:52:33 MDT
On Thu, Apr 17, 2008 at 11:59 PM, Nick Tarleton <email@example.com> wrote:
On Fri, Apr 25, 2008 at 10:31 AM, Tim Freeman <firstname.lastname@example.org> wrote:
> > CEV fixes who the AI cares about. Quoting directly from the cited article:
> > >As of May 2004, my take on Friendliness is that the initial dynamic
> > >should implement the coherent extrapolated volition of humankind.
> > The AI cares about the extrapolated volition of "humankind", not the
> > extrapolated volition of mammals or some other group.
>The extrapolated volition of humankind could choose to extend the
>group. The selection of humankind is part of the *initial dynamic*,
>it's right there. If you fix humanity (or present humanity, or
>whatever) as part of the goal system/utility function, it will never
>change, because a rational agent resists changes to its utility function.
I agree that rational agents resist changes to their utility function.
I'm not clear about the important difference between CEV and the AI's
utility function. If the AI is going to implement CEV, then CEV is
essentially a utility function, and changes to it will be resisted.
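That goal-preservation point can be sketched as a toy model (the
scenario, numbers, and function names below are my own illustration,
not anything from the thread): an agent scores a proposed new utility
function by the outcomes it would produce, but evaluated under its
*current* utility function, so the change is rejected.

```python
# Toy model of goal preservation: a rational agent evaluates a proposed
# change to its utility function using its CURRENT utility function.

def expected_utility(utility, outcomes):
    """Average utility of a set of outcomes."""
    return sum(utility(o) for o in outcomes) / len(outcomes)

def best_outcomes(utility, options):
    """The outcomes an agent steers toward when optimizing `utility`."""
    return sorted(options, key=utility, reverse=True)[:2]

# Current utility: care only about humans. Proposed: extend the group
# to other mammals (illustrative stand-ins, not CEV's actual content).
current_u = lambda o: o["humans"]
proposed_u = lambda o: o["humans"] + o["other_mammals"]

options = [
    {"humans": 10, "other_mammals": 0},
    {"humans": 7, "other_mammals": 6},
    {"humans": 2, "other_mammals": 12},
]

# Outcomes produced if the agent keeps vs. adopts the new utility function.
keep = best_outcomes(current_u, options)
adopt = best_outcomes(proposed_u, options)

# The decision itself is scored by the current utility function,
# so adopting the broader utility function looks like a loss.
value_of_keeping = expected_utility(current_u, keep)
value_of_adopting = expected_utility(current_u, adopt)
assert value_of_keeping > value_of_adopting
```

The asymmetry is the whole point: the proposed utility function is
judged by the standards of the old one, so any change that diverts
optimization away from the current goal scores worse, and a rational
agent declines it.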
Or maybe the intent is that the AI is going to do something other than
maximize a utility function. I couldn't see much talk of behavior on
the CEV page, which is disturbing given that the purpose of the entire
exercise is to describe what we want the AI to do. If that's the right
interpretation, what is CEV claiming the AI will do?
-- Tim Freeman http://www.fungible.com firstname.lastname@example.org
This archive was generated by hypermail 2.1.5 : Wed May 22 2013 - 04:01:25 MDT