Re: Volitional Morality and Action Judgement

From: Keith Henson
Date: Mon May 24 2004 - 16:55:16 MDT

At 08:39 PM 23/05/04 -0700, Jef wrote:


>Those who attend to transhumanist discussion lists are conversant with
>ideas of radical morphological change, super-intelligences, and the whole
>spectrum of possible worlds explored almost exclusively within the Sci-Fi
>literature and discussion groups such as this one. We tend to be young,
>smart, and therefore arrogant to an extent that is appreciated only with
>increasing life experience. Although we can't know what we don't know, we
>are smart enough to recognize this heuristic and use it to temper some of
>our speculation in the domain of the near-term here and now.
>What I think is under-represented in these groups is an effective level of
>awareness and appreciation of human nature on the broader scale.

Correct. There isn't even a name for this study, though applied
evolutionary psychology might come close.

>Humans are diverse, creative, and driven to improve their conditions.
>Humans, within their cultural matrix, comprise the most intelligent system
>we know of. This intelligent system even has built-in human values at its

True, but there is a lot else there, baggage left over . . . .

>Unfortunately, this intelligent system today suffers from a scaling
>problem; the processes that worked well enough at the level of the tribe
>and village are not effective at the global scale, and humans are at
>risk of failing due to a lack of global scale "management" (perception,
>control and feedback) of their vital processes.

As bad or worse, they don't understand the emergent origin of war. A
depressing but hard to avoid model is that the human line has been pushing
the limits of its ecosystem ever since we became too dangerous for the big
cats to eat. Population growth or climate upset periodically sets up the
conditions for starvation. If it was not possible to move, then war with a
neighbor was better (win or lose) for your genes than starving. I make the
case that "looming privation" turns up the "gain" on xenophobic memes in a
population. In the tribal environment, that synched the warriors to attack
a neighboring group. In the modern world we should strive to keep this
trait switched off.


>At this stage, to minimize pain and suffering as humanity moves to the
>next plateau, we don't need (yet) radical intelligence optimization of
>humans or AIs. What we need is the systems-level improvements that will
>allow us to get to the next level. *Further globalization of the economy
>is key to creating an interdependent planetary civilization where
>large-scale destruction of "the Other" will become meaningless. *Global
>communication, knowledge sharing, and adoption of new concepts of privacy
>and transparency will match and help us handle the threat of terrorist
>acts, and lay the groundwork for the next generation of humanity to grow
>up with a global perspective.

May I cite the dismal history of Easter Island?

The Easter Islanders used up the environment (particularly the trees with
which they could make boats) much like we are using up oil. When they no
longer had boats, they lost the ability to harvest the sea for fish,
resulting in "looming privation." Because they were all one narrow genetic
stock, they eventually split into warring groups over trivial matters and
went at each other with rocks till perhaps 95% of the population was
gone. At that point the ecosystem was able to recover somewhat and with
recovering prosperity, the psychological mechanisms inducing war turned off.

There is nothing wrong with large populations. The solar system could
eventually support a thousand times the current population at steady or
growing per-capita income for a long time. To get there without wars
though requires population growth slower than economic growth. "Looming
privation" especially after a long run of good times is something we can't
afford if it leads to wars--and unfortunately that's the situation in a lot
of the world.
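The condition in the paragraph above reduces to simple arithmetic: per-capita income rises only while the economic growth rate exceeds the population growth rate. A minimal sketch, with hypothetical growth rates chosen purely for illustration:

```python
# Per-capita income scales as (1 + g_econ) / (1 + g_pop) per year,
# so it grows only while economic growth outpaces population growth.

def per_capita_income(income0: float, g_econ: float, g_pop: float,
                      years: int) -> float:
    """Per-capita income after `years`, given annual growth rates."""
    return income0 * ((1 + g_econ) / (1 + g_pop)) ** years

# Hypothetical: 3% economic growth vs 1% and 4% population growth.
print(round(per_capita_income(100.0, 0.03, 0.01, 30), 1))  # rising over 30 years
print(round(per_capita_income(100.0, 0.03, 0.04, 30), 1))  # falling over 30 years
```

With population growing faster than the economy, per-capita income shrinks every year -- the "looming privation" condition.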

If this model holds up under further study and is widely understood, I hope
someone comes up with a way to convince political powers. The current US
administration certainly would not accept a model based--as this one is--on
evolution. Nor are they likely to accept that all-out birth control
(probably including abortion) is the way to reduce social pressures for war
and war-like terror.

>*Global thinking will progress to the kind of larger-scope rationality
>that makes the prisoner's dilemma and most zero-sum game problems trivial.
>*Intelligence augmentation will progress beyond our currently primitive
>mind-amplification tools and provide us with the necessary capability for
>information management and collaborative decision making during the next
>stage, and help us create the future.
>Recently on this list, human examples of enlightened altruism were
>presented. Siddhartha Gautama, Mahatma Gandhi, Martin Luther King Jr.
>(perhaps the Dalai Lama should also be included here) -- all are unenhanced
>humans, biologically similar to the rest of humanity, serving as models of
>what is already within our reach, given the necessary and conducive conditions.
>SIAI and similar efforts play a valuable role in promoting discussion and
>raising awareness of our accelerating times. AI will play a very large
>role in our future, but many on this list may be surprised what will be
>done without requiring machine sentience and recursive self
>modification. I believe these too will come, but we have to survive the
>global maturing of humanity before then.

I was at a psychology workshop this last weekend. There were people there
who are making remarkable progress on both understanding where we came from
and (using functional MRI) how brains are wired up.

They are the kind of people who understand the "Mama Bear" problem that
William Calvin discusses in the first chapter of The Ascent of Mind: Ice
Age Climates and the Evolution of Intelligence.

"Unfortunately, a little arithmetic shows that this story doesn't have a
happy ending. How many bears can the environment feed? Obviously, that's
the average bear population. And that means, on average, only two babies
per mother get to grow up and become a parent, out of the dozen or two that
she produces. The maximum population level is not set by the birth rate but
by the number of job slots afforded by the environmental niche occupied by
bears. . . ."

"That means the average Mama Bear is raising five-to-ten times more baby
bears than can possibly survive, absent, of course, miracles -- . . . ."

The opposition I got when I asked how far back humans became their own
major predator was really interesting (at a meta level). It is obvious
that hominid lines were able to fill the environment to capacity and then
some. The simple fact that our line spread so far into Asia so long ago
indicates population pressure on groups at the population edges. Behind
that edge, *something* (or things) held the population to the numbers the
environment could feed. Obviously diseases were a factor, but probably not
the only one.

But the idea that human-line groups killed off neighbors as a regular
feature of life, long enough to select for survival-response genes when
things started to get tight . . . . I wonder if humans generally have
emotional censors to reject such an idea. Perhaps such thoughts conflict
with our view of ourselves--at least as that view has been shaped by
times of relative plenty.

"Global maturing" may be a lot harder if censor biases keep us from
thinking about this class of knowledge.

Then again, perhaps humans differ in some way from bears: maybe they didn't
overfill the environment during the Stone Age, or they had some other way to
control populations that didn't involve groups killing each other.

This might be a bit off from the main topic of this list. It is,
however, interesting to look not only at the origin of wars but also at
our thinking biases about ourselves. Such biases could also be at work in
our thinking about AIs.

Keith Henson

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT