From: Chris Capel (pdf23ds@gmail.com)
Date: Sat Jul 23 2005 - 14:37:12 MDT
On 7/23/05, Russell Wallace <russell.wallace@gmail.com> wrote:
> The deadliest flaw in CV (actually it's far worse than merely deadly,
> as will be seen) is that it's still chasing the ghost of
> universe-dictated morality, that in simpler form was the source of
> Eliezer's and my first subjunctive planet kills.
What were those?
> It throws vast
> resources of intelligence - information-processing ability - behind
> moral axioms evolved for survival on the plains of Africa, and then -
> this is the problem - proceeds as though with unlimited power comes
> unlimited moral authority. In reality, a glut of intelligence/power
> combined with confinement - a high ratio of force to space - triggers
> the K-strategist elements of said axiom system, applying selective
> pressure in favor of memes corresponding to the moral concept of
> "evil". (Consider the trend in ratio of lawyers to engineers in the
> population over the last century for an infinitesimal foreshadowing.)
>
> In pursuit of a guiding light that isn't there,
"Guiding light" here being the overall convergence of volition? So
you're claiming that volition wouldn't really converge?
> the RPOP would
> extrapolate the interaction between K-strategist genes and parasite
> memes and force the result, with utter indifference to the
> consequences, on the entire human race. There will be no goal system
> for any element of the Collective but power - not clean power over the
> material world (which will effectively have ceased to exist except as
> the RPOP's substrate) but power always and only over others - a regime
> of the most absolute, perfectly distilled evil ever contemplated by
> the human mind. (Watch an old movie called "Zardoz" for a little more
> foreshadowing.)
I really don't understand this. What axiom system, and how is it
K-strategist? Grasping at your analogy, I guess that you mean to say
that human society has tended toward power-grasping, power
conglomeration, and that CV would extrapolate this tendency and
cause--who?--to be placed in absolute power over everyone else.
Doesn't this seem to go against the idea of CV giving each
sentience equal weight (or perhaps weight corresponding to their level
of awareness) in determining what the CV of humanity is? Or maybe you
mean to say that each individual human, when they gain knowledge and
awareness, tends to lust for power? But I think that most power-hungry
people are aberrant in this sense. In any case, there's a fundamental
incoherence in trying to find the convergence of two people's desires
to hold power over each other: each imagines themself as the one in
power, so the intersection of the two desires is very small. Then
again, the effects of gravity on a huge number of individual particles
give rise to planets and suns. Each particle in a planet attracts
every other particle, yet the attractions don't simply cancel out and
leave no net gravity. Summed, they exert enormous pressure on the
planet's core. Perhaps this is a good metaphor for the CV dystopia you
have in mind?
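To make that concrete, here's a toy sketch in Python (entirely my own
construction, nothing from the CV paper; the world-state encoding and
the desired_states helper are just illustrative assumptions).
Pairwise, the "I hold power" desires are disjoint, but every one of
them shares the structural feature "one agent rules the rest", which
is the analogue of the summed gravitational pressure:

from itertools import combinations

N = 5  # number of agents

# Each agent desires exactly one world-state: themselves ruling
# everyone else. Encode a world-state as (ruler, frozenset_of_ruled).
def desired_states(agent):
    others = frozenset(a for a in range(N) if a != agent)
    return {(agent, others)}

# Pairwise intersection of desires: empty for every pair.
for a, b in combinations(range(N), 2):
    overlap = desired_states(a) & desired_states(b)
    print("agents %d,%d share %d desired states" % (a, b, len(overlap)))

# But the structural feature "exactly one agent rules all the others"
# holds in every desired state of every agent.
structural = all(
    len(ruled) == N - 1
    for agent in range(N)
    for (_, ruled) in desired_states(agent)
)
print("all desires agree on the hierarchy structure:", structural)

If an aggregator naively extracted only what all the desires have in
common, it would be left with the hierarchy itself, with the identity
of the ruler still contested.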
I'm less trying to criticize your post than to give you food for
thought, so you might perhaps be able to make clearer the exact nature
of the danger you see.
You mention parasitic memes. This gives me an idea about a criticism
of collective volition that may or may not be what you're saying.
There could be some dangerous and very powerful memes that haven't yet
evolved that could eventually do so and lead to some sort of dystopia,
and the CV process might recognize their eventual inevitability and
thus choose their effects as the direction to lead humanity.
Alternatively, in a non-singularity timeline of history, memetic
organisms could eventually evolve such complexity that sentience
actually becomes more properly described as belonging to the meme
system than to the humans themselves. From a human perspective, this
organism might resemble such severe dystopias as those depicted in
The Matrix. The human becomes so subservient to the memetic complex
that there is no sense of individuality at all, and perhaps not even
conscious experience. Humans might be merely cells in a multicellular
organism. Then again, it's unclear to what extent the present state of
global society already admits this interpretation, and thus what kinds
of memetic organisms currently exist and how complex they are, so
speculation in this matter is particularly difficult. But put
abstractly, in an extrapolation of humanity's future, a particular
meme could be so powerful that it acts as an
attractor in phase-space toward a particular dystopia. How does CV
treat this possibility?
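For what it's worth, here's a minimal sketch of what I mean by an
attractor (again purely my own toy model, assuming a logistic
contagion/dropout dynamic I made up for illustration). x is the
fraction of the population hosting the meme, and as long as the meme
is contagious enough, every trajectory starting anywhere above zero
gets pulled to the same fixed point:

# dx/dt = contagion*x*(1-x) - dropout*x, integrated with Euler steps.
def step(x, contagion=3.0, dropout=1.0, dt=0.01):
    return x + dt * (contagion * x * (1 - x) - dropout * x)

for x0 in (0.001, 0.05, 0.5, 0.9):
    x = x0
    for _ in range(10000):
        x = step(x)
    print("start %.3f -> settles at %.3f" % (x0, x))

# With contagion > dropout there are two fixed points: x = 0
# (unstable) and x = 1 - dropout/contagion (stable). Once the meme
# exists at all, the dynamics drag the whole population toward the
# stable point; that point is the phase-space attractor standing in
# for the dystopia.

The question for CV is whether the extrapolation reads that stable
point as "where humanity is headed anyway" and steers toward it.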
A more coherent objection might follow these lines. The human mind, as
Eliezer himself has said, is an unstable thing. It's prone to get
itself into states that other humans call psychological disorders.
Things like sociopathy, narcissism, manic depression, anorexia,
schizophrenia. As "normal", "healthy" humans, we regard these states
as aberrant. But society here can have a huge effect on what we regard
as normal. When extrapolating an individual's volition as if they had
more intelligence and more interaction with society, the result could
vary greatly depending on what new knowledge one presumes the
extrapolated individual would have acquired, and what patterns of
interaction they would partake of. Smarter individuals fit into
society differently, and a society of
smarter people is a different kind of society. Not to mention the
effect of different systems of belief. One person who is introduced to
belief systems in one particular order will tend to accept certain
combinations of these belief systems and resist accepting new ones. A
Scientologist is extremely hard to convince of the truth of anything
that conflicts with their belief system. So does the AI extrapolate
using the knowledge in a person's head, simply subjectively increasing
their IQ? Or does it introduce new knowledge to the person? How does
the AI present the knowledge in a fair-minded, unbiased way? One
person's bias is another's dogma is another's obvious truth. Given
that a superintelligence could convince pretty much anyone of the
truth of anything, the prospect of educating people seems more to
resemble painting pictures on a blank canvas than it does conveying
information to an active listener. It seems that it's very difficult
for the AI to play a truly secondary role in this "extrapolation"
process. When the extrapolation of an individual's beliefs could lead
in directions ranging from complete irrationality and dogmatic
closed-mindedness to enlightened and balanced open-mindedness, depending
on what information the person is fed in what order, it becomes
obvious that we need a standard for communicating information
neutrally. I don't think it's possible to communicate information
neutrally, though. Any communication, without exception, is, in the
simplest view of it, the attempt to persuade the recipient of the
communication to adopt a certain attitude toward a certain object or
idea. This is inherently biased and one-sided.
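Here's a minimal sketch of the order-dependence I'm worried about (a
toy model I invented for this email, not anything from the CV paper):
an agent that weights each new piece of evidence by how well it agrees
with its current belief, as a crude stand-in for confirmation bias,
ends up in different places depending purely on presentation order.

# Beliefs and evidence both live in [0, 1]. The agent moves its belief
# toward each piece of evidence, but the step size shrinks the more
# the evidence clashes with what it already believes -- so updates do
# not commute, and presentation order matters.
def update(belief, evidence):
    agreement = 1.0 - abs(belief - evidence)
    learning_rate = 0.9 * agreement ** 2
    return belief + learning_rate * (evidence - belief)

def run(belief, stream):
    for e in stream:
        belief = update(belief, e)
    return belief

evidence = [0.9, 0.9, 0.1, 0.1]  # the same four "facts"...

print(run(0.5, evidence))                  # high evidence first: ~0.60
print(run(0.5, list(reversed(evidence))))  # low evidence first:  ~0.40

The same facts, fed in a different order, leave the agent on opposite
sides of 0.5. Whoever chooses the order chooses the destination.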
So, let me say it again, hopefully more simply. The process of
extrapolating a human's desires if they were smarter implicitly
includes extrapolating the human's desires if certain events were to
take place that affect them in a way we judge increases their
intelligence, or their maturity, or whatever. This simulation of the
human's developmental reaction to events is a necessary part of any
such extrapolation. The choice of which events are necessary to bring
about the desired developmental changes is a /hopelessly/ biased and
observer-centric choice, as are the developmental changes suggested. A
particularly relevant observation here is that there are certain
events that greatly increase a person's self-reported maturity and
enlightenment but in particularly malicious ways (e.g. being converted
to Scientology). There are many such events, pushing people in
entirely different directions: some toward what most people would
regard as real growth, and some toward what most people would regard
as craziness. There is no way to avoid
the persistent programmer influence in this choice.
Chris Capel
-- "What is it like to be a bat? What is it like to bat a bee? What is it like to be a bee being batted? What is it like to be a batted bee?" -- The Mind's I (Hofstadter, Dennet)