Re: Why extrapolate? (was Re: [sl4] to-do list for strong, nice AI)

From: Robin Lee Powell
Date: Sun Oct 25 2009 - 15:08:41 MDT

On Sat, Oct 24, 2009 at 04:54:37PM -0700, Tim Freeman wrote:
> From: Robin Lee Powell <>
> >I would hope that the extrapolation would include extrapolating
> >the actions of the AI; like saying, "Hey, there's a bug that's
> >going to make you suicidal in a few years; you want I should fix
> >that?".
> I would hope that too, but what you and I hope for isn't relevant.
> The question we're discussing is, would CEV do what we want?
> Arguments of the form "We want X so CEV must do it" aren't part of
> answering that question. That argument presupposes that CEV would
> do what we want, which is the question we started with.

Why would someone design CEV to *not* do what we want, though? 0.o

The *entire point* is to end up with what we want. CEV is just some
ideas about what that might look like.

Whether CEV, taken literally as written, actually gets us what we
want, I don't have a strong opinion on; I didn't realize that was
what you were interested in. I thought you were asking "In getting
us what we want, why is extrapolation useful?".


They say: "The first AIs will be built by the military as weapons."
And I'm thinking: "Does it even occur to you to try for something
other than the default outcome?" See ***

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:05 MDT