Re: FAI: Collective Volition

From: Eliezer Yudkowsky
Date: Mon May 31 2004 - 13:11:43 MDT

Philip Sutton wrote:

> Hi Eliezer,
>>Oh, c'mon, respecting self-determination doesn't require an FAI to be
>>*that* much of a wimp. There isn't that much self-determination in
>>today's world to respect. The average character in a science-fiction
>>novel gets to make far more interesting choices than a minimum-wage
>>worker; you could transport the entire human species into an alternate
>>dimension based on a randomly selected anime and increase the total
>>amount of self-determination going on. Not that I am advocating this,
>>mind you; we can do better than that. I only say that if human
>>self-determination is desirable, we need some kind of massive
>>planetary intervention to increase it.
> What happens if most ordinary humans don't want some or all of their
> ground rules changed by a really friendly super AI dictator

A collective volition is not a really friendly super AI dictator. Please
read the material provided.

> according to its

I prefer to say "our".

> plan? What happens next?

Whatever our coherent medium-range volitions not vetoed by long-range
volitions said should happen. The thought occurs to me that complete
silence would not be out of the question as a reply. We should invent our own
philosophies, not ask our extrapolated volitions, or someday we'll ask our
volitions and get back "The only answering procedure you know is asking
your volition." A wise volition would shut up; it's what we would want.
Hard to imagine anything sillier than arguing with your own extrapolated
volition.
I'd guess nine-tenths of the complaints would trail off within the first
week, if there weren't anyone to argue with, and too much fun that urgently
needed having.

I decline all responsibility for this sort of thing. I'm guessing at the
outcome, guessing at the response, not making either choice for humanity.

And I donated $326.17 to SIAI in 2001, so I am allowed to speculate for a
year, four months, and four days.

Eliezer S. Yudkowsky
Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT