From: Russell Wallace (firstname.lastname@example.org)
Date: Mon Jul 25 2005 - 13:06:02 MDT
On 7/25/05, Ben Goertzel <email@example.com> wrote:
> But Robin, isn't
> "the extrapolated vision of the collective"
> basically a rephrasing of
> "the will of the 'Collective Volition Extrapolation Machine' (as possibly
> filtered by a Last Judge)"
> I think that is what Russell meant by "the will of the Collective".
Sort of. We're talking about "the will of the '...Machine'", but being
a machine, it has no will of its own, except insofar as it is
constructed to have one. The original version of Collective Volition
suggested that it would _not_ have any will of its own, but would just
implement the extrapolated volition of humans _under the assumption
that they know they are merely part of the Collective and there is no
escape_ - that's what'll turn it into Hell.
Now, in this last round of messages there's been additional talk about
things like (paraphrased):
- The extrapolation algorithm could be any one of a billion choices,
each of which would have different results (hard to comment on that
without having any idea of the criteria by which the algorithm will be
chosen).

- The machine will have something of a will of its own in the form of
'superhumane morality' (which doesn't exist, but might end up meaning
a self-cover for programmer-specified morality, which might help, or
might mean the results of some badly understood algorithm, which might
at least result in a clean planet-kill).

- CEV won't rule; it'll do nothing except run for 5 minutes, construct a
successor from scratch, and turn itself off (hopefully that'll result
in either a clean malfunction/shutdown or again at least a clean
planet-kill).
> However, it is quite different from explicitly positing some particular
> value (like e.g. some better-specified version of 'a reasonable degree of
> freedom for sentients') as guidance for the post-Singularity uber-AI...
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT