Volition versus decision

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Fri Jun 04 2004 - 13:51:22 MDT


Ben Goertzel wrote:
>>
>> But what if you are horrified by the consequences of this choice in 30
>> years? How much thought did you put into this before deciding to
>> make it the eternal law of the human species? Are you so confident in
>> the power of your moral reasoning, oh modest Ben?
>
> The basic principle of democracy (which is based on choice, not
> extrapolated volition) has been around a long time; I don't claim to
> have invented it. Modern democracies work, pretty much, by letting
> people do what they want except when their wants collide, in which case
> a hodge-podge of moral principles and pragmatic considerations is used
> for conflict resolution. I don't always like the results, but I find
> them sorta acceptable, preferable to what I see coming out of other
> governmental systems on Earth today or historically.
>
> Extrapolated-volition-based FAI is as much like fascism as it is like
> democracy --- fascism being a political philosophy in which the will of
> the individual is, in principle, subordinate to the will of the state,
> which is understood to represent the greater good. In fact the will of
> the state, in fascism, is supposed to represent what people SHOULD want,
> and in some fascist theorists' writings, what they WOULD want if they
> were freed of the damaging illusions plaguing their minds.
>
> I am confident that the majority of Earthlings would vote for democracy
> over extrapolated-volition-based AI-control. But of course, this is a
> silly statement, since "voting for democracy" presumes that democracy
> already exists ;-) You could argue that, even if people would vote for
> democracy now, their future selves wouldn't want them to... etc. etc.

But do you, Ben, let alone the majority of modern-day Earthlings, have the
ability to correctly weigh the risks and benefits of direct democracy
controlling a superintelligence (EEEK!) versus collective volition? I
don't. I could be wrong, and the collective volition could turn over the
power to the UN General Assembly (EEEEEK!).

It's people's lack of moral caution, their lack of any fear of screwing up,
their horrifying readiness to trust themselves with power, that causes me to
turn the choice of meta-dynamic over to a collective volition (which is a
form of humane transhuman intelligence, and one that structurally cares very
strongly about our expected first-person perspectives on the issue). I don't
trust myself. Nor do I trust other humans. From my studies of cognitive
psychology I know in too much detail how little grasp the human mind has
upon reality, how ready people are to make decisions on the basis of what
sounds like a good idea at the time, and how unrelated that is to their
actual future experience and reactions. Just the prospect of humans deciding
on the basis of a half-paragraph *description* of the outcomes, let alone
their description of a human *prediction* of the outcomes, makes me want to
flee screaming into the night.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

