From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Fri Jun 04 2004 - 12:25:29 MDT
Michael Anissimov wrote:
> Ben Goertzel wrote:
>
>> I don't want some AI program, created by you guys or anyone else,
>> imposing its inference of my "volition" upon me.
>
> The word "imposing" suggests something out of line with your volition.
> But the whole point of any FAI is to carry out your volition. If the
> volition it is carrying out is unfavorable and foreign to you, then that
> would constitute a failure on the part of the programmers. The point is
> to carry out your orders in such a way that the *intent* takes
> precedence over the *letter* of your requests. Imagine a continuum of
> AIs, one extreme paying attention to nothing but the letter of your
> requests, the other extreme carrying your intent too far to the point
> where you disapprove. The task of the FAI programmer is to create an
> initial dynamic that rests appropriately between these two extremes.
No, Ben's got a legitimate concern there. Remember that the purpose of the
initial dynamic is to choose a dynamic, based on a majority vote of
extrapolated humanity, and that this majority vote could conceivably do
something to Ben he doesn't like. I gave extensive reasons in Collective
Volition for why this can't be pre-emptively ruled out by the programmers
(e.g., it would also rule out infants growing up into humans rather than
super-infants; the programmers would have to decide For All Time what
constitutes a sentient being; the initial dynamic wouldn't have full freedom
to rewrite itself; and so on). It's under the heading of why the initial dynamic
can't include a programmer-decided Bill of Rights. Ultimately it boils
down to moral caution: If you pick ten Rights, three will be Wrong.
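In caricature, the shape of the thing looks something like this. This is only
a toy to show the structure, nothing like the real extrapolation or the real
vote, and every name in it is mine, not anything specified in CV:

    from collections import Counter

    def choose_successor_dynamic(extrapolated_votes):
        # Toy 'initial dynamic': tally the extrapolated votes of humankind
        # for a successor dynamic and hand control to the majority choice.
        # Note what is deliberately absent: no programmer-written Bill of
        # Rights filters the result before it takes effect.
        tally = Counter(extrapolated_votes)
        winner, _count = tally.most_common(1)[0]
        return winner

    # Three extrapolated individuals, each naming a candidate successor dynamic:
    print(choose_successor_dynamic(["dynamic_A", "dynamic_B", "dynamic_A"]))  # dynamic_A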
>> When I enounced the three values of Joy, Growth and Choice in a recent
>> essay, I really meant *choice* -- i.e., I meant *what I choose, now, me
>> being who I am*. I didn't mean *what I would choose if I were what I
>> think I'd like to be*, which is my understanding of Eliezer's current
>> notion of "volition."
Yes, I understand. But I don't trust people's unvarnished choices to
accomplish what they wish, or even to not kill them outright. People aren't
cautious enough. I called this "Murder by genie bottle" and I meant it.
The power to tear apart a god like tinfoil is too much power. The power
must be kept out of the hands of corporations, governments, the original
programmers, the human species itself until it has a chance to grow up.
Otherwise we're gonna die, murder by genie bottle. And yet we need SI, to
protect us from UFAI, to render emergency first aid and perhaps do other
things we can't comprehend. Collective Volition is my current proposal for
cutting the knot. Yes, it is scary. And yes, there is a possibility that
what you regard as your individual rights will be violated; I can't rule
that out without personally exercising more control over the future than I
dare. All I can do is plan verification processes, ways to make sure that
we end up in a Nice Place to Live.
>> To have some AI program extrapolate from my brain what it estimates I'd
>> like to be, and then modify the universe according to the choices this
>> estimated Ben's-ideal-of-Ben would make (along with the estimated
>> choices of others) --- this denies me the right to be human, to grow and
>> change and learn. According to my personal value system, this is not a
>> good thing at all.
For humanity to grow up together, choose our work and do it ourselves? Let
not our collective volition deprive us of our destiny? That is also my
wish, in this passing moment, and the dynamics of Collective Volition are
designed to take exactly that sort of wish into account.
>> Eventually this series might converge, or it might not. Suppose the
>> series doesn't converge, then which point in the iteration does the AI
>> choose as "Ben's volition"? Does it average over all the terms in the
>> series? Egads again.
>
> Good question! The ultimate answer will be for the FAI to decide, and
> we want to seed the FAI with the moral complexity necessary to make that
> decision with transhuman wisdom and compassion. Eliezer and Co. won't
> be specifying the answer in the code.
No, that's exactly the sort of thing that has to be specified in an initial
dynamic. The initial dynamic isn't permanent, but it needs an *initial*
specification. This is discussed in Collective Volition, albeit only
peripherally, because the details of this process are exactly the sort of
thing that I'm likely to change with improving technical understanding.
But there's enough discussion in CV to show the problem exists and that I
know it exists.
In answer to Ben's question, the extrapolation doesn't 'pick one term' or
'average over' terms; it superposes terms and calculates the distance and
chaos.
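To give that one-sentence answer a concrete shape, here is a loose sketch,
purely illustrative and not anything specified in CV; the weighting scheme,
the distance measure, and all the names below are placeholders of my own
choosing:

    import numpy as np

    def superpose(terms, decay=0.5):
        # Keep every term of the extrapolation series as a weighted
        # superposition, rather than picking the last term or collapsing
        # to a plain average.  Deeper (more speculative) extrapolations
        # get geometrically smaller weight.
        terms = np.asarray(terms, dtype=float)   # shape: (n_terms, n_issues)
        weights = decay ** np.arange(len(terms))
        weights /= weights.sum()
        return terms, weights

    def spread(terms, weights):
        # 'Distance and chaos': weighted mean pairwise distance between the
        # terms.  High spread means the series hasn't settled, and the
        # dynamic should not treat the superposition as a firm answer.
        total = 0.0
        for i in range(len(terms)):
            for j in range(len(terms)):
                total += weights[i] * weights[j] * np.linalg.norm(terms[i] - terms[j])
        return total

    # Three iterates of one person's extrapolated stance on two issues:
    series = [[0.9, 0.1], [0.7, 0.4], [0.6, 0.6]]
    terms, w = superpose(series)
    print("spread (chaos):", spread(terms, w))
    # The dynamic keeps the whole weighted superposition (terms, w) instead
    # of choosing one term or averaging; its willingness to act goes down
    # as the spread goes up.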
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence