From: Samantha Atkins (samantha@objectent.com)
Date: Sun Jun 13 2004 - 01:06:19 MDT
On Jun 12, 2004, at 6:57 PM, Eliezer Yudkowsky wrote:
> This question does appear to keep popping up. Roughly, a collective
> volition is what I get when:
>
> (a) I step back and ask the meta-question of how I decided an earlier
> Eliezer's view of "Friendliness" was "mistaken".
>
> (b) I apply the same meta-question to everyone else on the planet.
>
> Whatever it is that you use, mentally, to consider any alternative to
> collective volition, anything that would be of itself friendlier -
> that's you, a human, making the decision; so now imagine that we take
> you and extrapolate you re-making that decision at a higher level of
> intelligence: if you knew more, thought faster, were more the person
> you wished to be, etc.
>
Yes, I get that, and it is enticing. But precisely how will the FRPOP
get its bearings as to what is the "direction" of "more the person"?
Some of the other criteria are a bit problematic too, but this one
seems to be the central trick. More the person I would like to be? I,
with all my warts? Wouldn't I have a perhaps badly warped view of what
kind of person I would like to be? Would the person I would like to be
indeed make better choices? And how will the AI come to know of, or
model, this person?
> The benefit of CV is that (a) we aren't stuck with your decision about
> Friendliness forever (b) you don't have to make the decision using
> human-level intelligence.
>
Well, we don't make the decision at all, it seems to me. The AI does,
based on its extrapolation of our idealized selves. I am not sure
exactly what our inputs would be. What do you have in mind?
> It's easy to see that all those other darned humans can't be trusted,
> but what if we can't trust ourselves either? If you can employ an
> extrapolation powerful enough to leap out of your own fundamental
> errors, you should be able to employ it on all those other darned
> humans too.
Well yes, but it is an "if", isn't it? It is actually a fairly old
spiritual exercise to invoke one's most idealized self and listen to
its advice, or let it decide. It takes many forms, some of which cast
the idealized self as other than oneself, but the gist is not that
different.
>
> Maybe a better metaphor for collective volition would be that it
> refers questions to an extrapolated adult humankind, or to a
> superposition of the adult humanities we might become.
>
So the AI becomes an adjunct to, and amplifier of, a specialized form
of introspective spiritual exercise? Wild! AI-augmented
self-improvement of humankind.
- samantha