Re: FAI: Collective Volition

From: Samantha Atkins (samantha@objectent.com)
Date: Tue Jun 01 2004 - 01:32:26 MDT


Thanks for writing this, Eliezer. My first-level comments follow.

How can we possibly get an SAI, excuse me, a "Friendly Really Powerful
Optimization Process", to successfully extrapolate the full collective
volition of humanity? At this point in the game we can't even master
simple DWIM applications. We do not have AIs that are capable of
understanding immediate volition, much less the full extended volition.
So how can any device/AI/optimization process claiming to do so
possibly seem other than completely arbitrary and un-Friendly?

Extrapolation of volition based on what we would want if we were very
different beings than we are is even more likely to go far off the
mark. How can this possibly avoid diverging wildly into whatever the
FAI (or whatever) forces it to converge on, which is simply what it
believes would be "best"? That is unlikely to bear much resemblance to
what any actual humans want at any depth.

The task you describe for the FRPOP is the task myths would have a
fully enlightened, god-level, super-wise being attempt, and only then
with a lot of cautions. IMHO, attempting to do this with an
un-sentient recursively self-improving process is the height of folly.
It seems even more hubristic and difficult than the creation of a
greater-than-human intelligent sentience. I don't see why you believe
yourself incapable of the latter but capable of the former.

Now, you do back away from such implications somewhat by having volition
extrapolation be only a first-level working solution until "that
volition" evolves and/or builds something better. But what do you mean
by "that volition" here? Is it the FRPOP, what the FRPOP becomes, the
FRPOP plus humanity plus the extrapolated volition to date, or what? It
isn't very clear to me.

If the ruleset is scrutable and can be understood by some, many, or
most humans, then why couldn't humans come up with it themselves?

I am not at all sure that our "collective volition" is superior to the
very best of our relatively individual volitions. Saying it is
collective may make it sound more egalitarian, democratic, and so on,
but that may not have much to do with it actually being best able to
guarantee human survival and well-being. It looks like you were getting
toward the same thing in your 2+ days of partial
recanting/adjustment-wishes. I don't see how "referring the problem
back to humanity" is all that likely to solve the problem. It might,
however, be the best that can be done.

I think I see that you are attempting to extrapolate beyond the present
average state of humans, with their self-knowledge/stated
wishes/prejudices and so on, to what they really, really want in their
most whole-beyond-idealization core. I just find it unlikely to the
point of absurdity that any un-sentient optimizing process, no matter
how recursively self-improving, will ever arrive there.

Where are the sections on enforcement of the conditions that keep
humanity from destroying itself? What if the collective volition leads
to self-destruction or the destruction of other sentient beings? More
importantly, what does the FAI protect us from, and how is it intended
to do so?

Section 6 is very useful. You do not want to build a god, but you do
want to enshrine the "true" vox populi, vox Dei. It is interesting in
that here the vox populi is the extrapolation of the volition of the
people, and in that manner a reaching for the highest within human
desires and potentials. This is almost certainly the best that can be
done by and for humans, including those building an FAI. But the
question is whether that is good enough to create something which, to
some (yet to be specified) extent, enforces or nudges powerfully toward
that collective volition. Is there another choice? Perhaps not.

A problem is whether the most desirable volition is part of the
collective volition or relatively rare. A rare individual's or group of
individuals' vision may be a much better goal, and may perhaps even be
what most humans eventually have as their volition when they get wise
enough, smart enough, and so on. If so, then collective volition is not
sufficient; a judgment of the best volition must be made to get the
best result, especially if the collective volition at this time is
adverse to the best volition, and if the enforcement of the collective
volition, no matter how gentle, might even preclude the better one.
Just because judging is hard and fraught with danger doesn't mean it is
necessarily better not to do so.

The earth is NOT scheduled to "vanish in a puff of smiley faces". I
very much do not agree that that is the only alternative to FAI.

Q1's answer is no real answer. The answer today is that we have no
friggin' idea how to do this.

Q2's answer is opaque (yes, I did raise this question). I can see
using an SAI to tease out and augment the ability of humanity to see
and embrace its own best volition. But this doesn't seem to be what is
proposed. Or, at the least, I need to ask various questions about what
kinds of rules you plan for the FAI to enforce, in order to understand
whether your proposal is compatible.

I am not into blaming SIAI, and I find it amazing that you would toss
this in.

Q4's answer disturbs me. If there are "inalienable rights", it is not
because someone or other has the opinion that such rights exist. It
is because the nature of human beings is not utterly mutable, and this
fixed nature leads to the conclusion that some things are required for
human well-functioning. The things that are required by the nature of
humans are "inalienable" in that they are not the made-up opinions of
anyone, nor a favor granted by some governmental body or other. As
such, these true inalienable rights should grow straight out of
Friendliness toward humanity.

Your answer also freely mixes the current opinions of the majority of
humankind with the actual collective volition. It seems rather sloppy.

That's all for now. I am taking a break from computers for the next
four days or so, so I'll catch up next weekend.

- samantha


