From: Tim Freeman (tim@fungible.com)
Date: Sat Oct 24 2009 - 17:54:37 MDT
Robin Lee Powell <rlpowell@digitalkingdom.org> wrote:
>I would hope that the extrapolation would include extrapolating the
>actions of the AI; like saying, "Hey, there's a bug that's going to
>make you suicidal in a few years; you want I should fix that?".
I would hope that too, but what you and I hope for isn't relevant.
The question we're discussing is, would CEV do what we want?
Arguments of the form "We want X, so CEV must do it" aren't part of
answering that question: such an argument presupposes that CEV would
do what we want, which is the very question we started with.
I don't know what you mean by "include extrapolating the actions of
the AI" in extrapolating human volition. Specifically, I have no
meaning for "include" that makes sense in this context -- what does it
mean to include one extrapolation in another? Furthermore, I see no
similarity between extrapolating (the consequences of?) actions and
extrapolating volition, so I get confused when you use the word
"extrapolating" for both.
>Wait, what? That's a total failure to enact the result; that's
>sub-goal stomp of the worst kind. Doing that guarantees that Mpebna
>will never get to the Nice Place To Live that CEV envisioned, so
>it's a stupid action, contrary to the point of CEV.
I see you aren't quoting the definition of CEV when you're arguing
about what it means. Is the proposed scenario consistent with the
definition, or not?
Maybe the answer is "we don't know", which points at another basic
problem with CEV. It's expressed in English and we can have pointless
arguments forever about what it means.
>The point of extrapolating is to stop CEV from doing things to make
>people happy now that would prevent them from getting the best
>outcome in the future.
Well, if that makes sense, then that's what you want. I've heard other
people claim to want the same thing. I suppose an AI that's giving
people a weighted average of what they want now would do that if
people like you got a high enough weight in the average. Be aware
that other people really do value the present more than the future,
and you'll be in conflict with them.
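
To be concrete about what I mean by "a weighted average of what they
want now", here's a toy Python sketch. The people, weights, and
utility numbers are made up for illustration; this isn't from any
real design:

    # Score a candidate outcome by a weighted average of how much each
    # person wants it *right now*.
    def weighted_average_score(utilities, weights):
        total_weight = sum(weights)
        return sum(u * w for u, w in zip(utilities, weights)) / total_weight

    # Two present-oriented people and one future-oriented person.
    # utilities[i] = how much person i wants the outcome now.
    spend_now  = [0.9, 0.8, 0.1]
    save_later = [0.2, 0.3, 0.9]

    equal_weights = [1.0, 1.0, 1.0]
    print(weighted_average_score(spend_now, equal_weights))    # 0.60, wins
    print(weighted_average_score(save_later, equal_weights))   # ~0.47

    # Give the future-oriented person enough weight and the ranking flips.
    skewed_weights = [1.0, 1.0, 5.0]
    print(weighted_average_score(spend_now, skewed_weights))   # ~0.31
    print(weighted_average_score(save_later, skewed_weights))  # ~0.71, wins

The point is that "don't trade the future away" only comes out of
this machinery if the weights happen to favor the people who care
about the future.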
I don't understand how to define an ethical system that doesn't
sometimes trade the future for the present. If I prefer outcome X
over outcome Y, then there has to be some observation made at a
specific time that lets me know whether X or Y happened, and the
future after that point doesn't matter. I can care about the infinite
future if I have a weighted collection of preferences for events at
different times, but the sums have to converge or it makes no sense,
and if you don't at some point trade the future for the present, the
sums don't converge. Peter de Blanc gave a talk about this at one of
those SIAI interns' dinners -- it looks like his paper, which I have
not understood in detail, is here:
http://adsabs.harvard.edu/abs/2009arXiv0907.5598D
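
To spell out the convergence point in my own notation (this is my
gloss, not anything from de Blanc's paper): say the total value of a
history is

    U = sum over t = 0, 1, 2, ... of w_t * u_t

where u_t is how good time t is and w_t is how much you care about
time t. If the w_t never shrink -- the future always counts as much
as the present -- then any floor u_t >= epsilon > 0 makes the sum
diverge. If instead w_t = gamma^t for some 0 < gamma < 1 and the u_t
are bounded by u_max, then U <= u_max / (1 - gamma), which is finite;
but a decaying w_t is precisely a willingness to trade the future for
the present.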
I'm not sure if you meant "the best" or "a good enough" there. If you
really meant "the best", we disagree. One problem with "the best" is
that we don't have a definition for it. Another problem is that the
best is the enemy of good enough. I'd rather get a good-enough
outcome with a well-defined and understandable procedure than try to
get "the best" outcome with something that seems vague and
unpredictable. If you want to solve it, you must regard it as an
engineering problem, not a math or science problem.
>If it turns out, for example, that future humans will have a shared
>morality of tolerance and mercy; if CEV simply did whatever current
>humans want, then it might (for example) find and utterly destroy all
>the remaining Nazi officers hiding out in various places, an action
>that future humans would predictably abhor, and that cannot be
>un-done. A crappy example, admittedly, but the point is just that
>without extrapolation we can't avoid permanent effects we might regret
>later.
I agree that my procedure is prone to this sort of bug, where if
enough people want person X dead now, the AI will go murder X for
them. I'd like to see a solution to that, but I can't make enough
sense of extrapolation in CEV to use it as a solution.
-- Tim Freeman http://www.fungible.com tim@fungible.com