Re: CEV specifies who the AI cares about (was Re: Can't afford to rescue cows)

From: Nick Tarleton (nickptar@gmail.com)
Date: Fri Apr 25 2008 - 15:50:38 MDT


On Fri, Apr 25, 2008 at 10:31 AM, Tim Freeman <tim@fungible.com> wrote:
> On Thu, Apr 17, 2008 at 11:59 PM, Nick Tarleton <nickptar@gmail.com> wrote:
> > Fixing who the AI cares about is over-specification. That's what the
> > AI (in the CFAI model) or extrapolated volition (in the newer model)
> > is supposed to figure out.
>
> > http://www.sl4.org/wiki/CoherentExtrapolatedVolition
>
> CEV fixes who the AI cares about. Quoting directly from the cited article:
>
> >As of May 2004, my take on Friendliness is that the initial dynamic
> >should implement the coherent extrapolated volition of humankind.
>
> The AI cares about the extrapolated volition of "humankind", not the
> extrapolated volition of mammals or some other group.

The extrapolated volition of humankind could choose to extend the
group. The selection of humankind is part of the *initial dynamic*;
it's right there in the quoted sentence. The initial dynamic is only a
starting point, superseded by whatever the extrapolation outputs. But
if you fix humanity (or present humanity, or whatever) as part of the
goal system/utility function, it will never change, because a rational
agent resists changes to its supergoals/utility function.
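
To make the resistance point concrete, here's a toy sketch in Python
(purely my own illustration: the names, the welfare model, and the
even-split forecast are all invented, not anything from the CEV
document). A rational agent evaluates a proposed change to its own
utility function using its *current* utility function, so if the
beneficiary set is hard-coded into that function, widening the set
always scores as a loss:

    RESOURCES = 100.0

    def make_utility(beneficiaries):
        # Toy utility: a world's value is welfare summed over a fixed
        # beneficiary set baked into the function.
        def utility(world):  # world: dict mapping being -> welfare
            return sum(world.get(b, 0.0) for b in beneficiaries)
        return utility

    def predicted_world(beneficiaries):
        # Crude forecast: the agent splits its resources evenly among
        # whoever its utility function makes it care about.
        share = RESOURCES / len(beneficiaries)
        return {b: share for b in beneficiaries}

    def accepts_change(current_set, proposed_set):
        # A rational agent adopts a new utility function only if the
        # predicted outcome scores better by its CURRENT one.
        u_now = make_utility(current_set)
        return (u_now(predicted_world(proposed_set))
                > u_now(predicted_world(current_set)))

    print(accepts_change({"humans"}, {"humans", "cows"}))  # False

In CEV the beneficiary set isn't a supergoal protected this way; it's
an output of the extrapolation, so the extrapolated volition can widen
it without the agent ever acting against its own utility function.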


