From: Ben Goertzel (email@example.com)
Date: Wed Aug 31 2005 - 06:33:04 MDT
I have been thinking a little more about "extrapolated collective volition"
(http://www.intelligence.org/friendly/collective-volition.html), and have had
what seems to me a moderately interesting train of thought in this regard.
[Of course, as I've noted before, I consider all ideas in this direction
highly science-fictional and not that likely to have practical import for
the future of the universe. I suspect that by the time it becomes possible
to build a Collective Volition Extrapolator machine, the human race will
have already either ceded power over the universe to its constructed
descendants, or will have realized that it never had this power in the first
place.]
This email assumes the reader has familiarity with the above-referenced
document written by Eliezer Yudkowsky.
First I will discuss some properties of collective volition extrapolation,
and then I will propose a modification to the collective volition
extrapolation process.
Now for the "discussion of properties" part....
Suppose it is true that there are N attractors that superhumanly intelligent
minds will tend to fall into: A_1, ..., A_N (where N is much smaller than
the number of humans). [Let's call this Hypothesis A]
Next, suppose that the trajectory of each individual human under conditions
of "increasing knowledge and self-knowledge" depends fairly sensitively on
the environment of that human -- so that, depending on the environment, each
human mind may wind up in more than one of the N attractors. [Let's call
this Hypothesis B]
Now, in this case, for the outcome of the "collective volition
extrapolation" process, do you want to choose the attractor (out of the N)
that would occur for the most humans in an average over all environments?
I don't feel comfortable at all that this is a good way to determine the
initial state for the next phase of development of the universe.
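As a toy illustration of that selection rule (every name here is hypothetical, and the "extrapolation dynamics" is a random stand-in for the real, unknown process): under Hypotheses A and B, the rule amounts to a Monte Carlo estimate of how often each attractor occurs across humans and environments, followed by picking the most frequent one.

```python
import random
from collections import Counter

# Hypothesis A: a small number N of attractors (labels are hypothetical).
ATTRACTORS = ["A_1", "A_2", "A_3"]

def extrapolate(human, environment):
    """Toy stand-in for the unknown extrapolation dynamics: which
    attractor this human falls into in this environment.
    Hypothesis B: the outcome depends on the environment."""
    random.seed(hash((human, environment)))  # deterministic toy dynamics
    return random.choice(ATTRACTORS)

def majority_attractor(humans, environments):
    """Pick the attractor reached by the most humans, in an
    average over the sampled environments."""
    counts = Counter(
        extrapolate(h, e) for h in humans for e in environments
    )
    return counts.most_common(1)[0][0]

print(majority_attractor(range(100), range(50)))
```

The discomfort voiced above is precisely that the argmax throws away everything about the minority attractors, however large their combined weight.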
Of course, I don't know that Hypotheses A and B are true. But you don't
know that they're false either.
Suppose the volition extrapolator determines that they ARE true. Then, as
Last Judge, would you decide to call off the Collective Volition approach to
guiding the future of the universe?
Or would you prefer to, exerting some Last-Judgely universe-engineering
power, decide to build a machine that could partition the universe into N
partitions P_1, ..., P_N, with P_i getting an amount of resources
proportional to the probability that a random mind in a random environment
(drawn from appropriate distributions) winds up in A_i?
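The proportional-partition idea can be sketched numerically (the sample of outcomes below is invented purely for illustration): allocate to each P_i a share of resources equal to the empirical frequency of A_i among sampled (mind, environment) outcomes.

```python
from collections import Counter

def proportional_partition(outcomes, total_resources):
    """Split total_resources across partitions P_i in proportion to
    how often each attractor A_i occurs among sampled outcomes."""
    counts = Counter(outcomes)
    n = len(outcomes)
    return {a: total_resources * c / n for a, c in counts.items()}

# Toy sample of (mind, environment) -> attractor outcomes.
sampled = ["A_1"] * 60 + ["A_2"] * 37 + ["A_3"] * 3
print(proportional_partition(sampled, 1000.0))
# → {'A_1': 600.0, 'A_2': 370.0, 'A_3': 30.0}
```

Note that a 3.2%-weight attractor gets a 3.2% sliver of the universe under this rule, which is exactly the scenario worried about below.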
The problem with this Last-Judgely intervention, of course, is that it may
not be what ANY of the final attractors A_i want. Or it may be what *your*
final attractor A_i wants, but not what any of the others want (and yours
may have a 3.2% probability weight...).
My overall conclusion, at this point, is that yes, it would be really
valuable and really cool to build and run a collective-volition extrapolator
(along with an individual-volition extrapolator, of course).
However, I am not going to just accept the results of such an extrapolator
as good, nor am I going to accept it as "good if subjected to the final
yea-or-nay power of a Last Judge." I'm afraid things are subtler than that.
This leads me to the second part of the e-mail, in which I outline a
somewhat amusing modification to the collective volition extrapolation
process.
Consider this series:
S_0 = human race
S_1 = human race, after collectively studying the results of the First
Collective Volition Extrapolator (which extrapolates the volition of S_0)
S_2 = human race, after collectively studying the results of the Second
Collective Volition Extrapolator (which extrapolates the volition of S_1)
... etc. ...
How does this differ from simple Collective Volition extrapolation?
Partly, it's because the probability distribution over future environments
is being constrained: we're looking only at futures in which the human race
builds a series of volition extrapolators and studies their results. This
may be a small percentage of all possible futures.
Of course, if the First Collective Volition Extrapolator is *correct*, then
the results of all the later Volition Extrapolators should simply agree with
it. But most likely this CEV_1 machine will involve a lot of
approximations, so that the series will not be completely repetitive....
One can then hypothesize that, once CEV_n and CEV_(n+1) substantially agree
for a few iterations, one has a somewhat trustable approximation of CEV [the
Humean problem of induction aside.. ;-) ]
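The iterate-until-agreement scheme can be sketched as a fixed-point loop. Everything concrete below is a placeholder: the real extrapolator, "studying," and "agreement" are of course not one-line functions, and the numeric toy exists only to show the control flow.

```python
def iterated_cev(population, extrapolate, study, agrees, max_iters=100):
    """Run successive Collective Volition Extrapolators, letting the
    population study each result, until CEV_n and CEV_(n+1) agree."""
    state = population          # S_0
    prev = None                 # result of the previous extrapolator
    for _ in range(max_iters):
        result = extrapolate(state)        # CEV of the current S_n
        if prev is not None and agrees(prev, result):
            return result       # somewhat-trustable approximation of CEV
        state = study(state, result)       # S_(n+1): population after study
        prev = result
    raise RuntimeError("no agreement within max_iters")

# Toy instantiation: 'volition' is a number that settles to a fixed point.
approx = iterated_cev(
    population=0.0,
    extrapolate=lambda s: 0.5 * s + 1.0,   # approximate extrapolator
    study=lambda s, r: r,                  # population adopts the result
    agrees=lambda a, b: abs(a - b) < 1e-6,
)
print(round(approx, 3))  # → 2.0
```

The induction caveat above still applies: agreement for a few iterations is evidence of a fixed point, not proof of one.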
What I like about this kind of approach (let's call it "Iterated Collective
Volition Extrapolation") is that it involves the current human race
explicitly in the decision of its own fate, rather than placing the power in
the hands of some computational process that's supposed to extrapolate the
volition of the human race.
I think that making accurate volition extrapolators is far more difficult
than making superhuman AI's, so I really doubt that this kind of speculation
is going to have any relevance to the future of the human race in the
pre-superhuman-AI period. On the other hand, it may be that superhuman AI's
will decide this is an interesting way to determine THEIR future.
This gives rise to the somewhat obvious idea of creating a superhuman,
sentient AI whose short-term goal is to make itself smart enough to build a
Collective Volition Extrapolator, and then apply it, starting off the series
above, but with a variation...
T_0 = human race plus AI's
T_1 = human race plus AI's, after collectively studying the results of the
First Collective Volition Extrapolator (which extrapolates the volition of
T_0)
T_2 = human race plus AI's, after collectively studying the results of the
Second Collective Volition Extrapolator (which extrapolates the volition of
T_1)
... etc. ...
Of course, there are the familiar dangers involved in creating a superhuman
AI oriented toward a particular goal: how do we know that once it gets
smarter than us, it won't feel like doing something besides building a
Collective Volition Extrapolator? However, at least building a CEV is a
concrete and easy-to-specify goal, with a brief time-horizon associated with
it.
A final caveat: In case I haven't already made it clear, let me re-emphasize
that I do NOT consider the ideas in this email to be any kind of solution to
the problem of FAI, nor to be particularly important for the future of the
universe. I present them here primarily for intellectual stimulation and
amusement.
-- Ben G