Phase Changes in the Evolution of Complexity

From: Michael Wilson (
Date: Sat Apr 03 2004 - 08:34:25 MST

This essay is an excerpt from my notes on 'The Origins of Virtue' by Matt
Ridley. It seems particularly relevant in the light of Ben's recent guest
appearance on #sl4, so I have copied it to the mailing list.
It is important to understand the coming events in these terms.
Unfortunately evolution on a planetary scale, be it of genes or memes, is
beyond the direct grasp of human intelligence. Our intuitive
understanding is a reasonable approximation of the relatively narrow
range of phenomena our ancestors encountered daily; we have no intuitive
ability to understand quantum physics or cosmology. Unlike those areas
the effects of evolution are visible directly in our everyday experience,
but human intelligence is too weak to analyse more than a tiny slice of
the relevant phenomena. Dedicated study will allow you to build up pale
mental approximations of the vast and subtle causal networks that
constitute the struggle for survival of self-reproducing patterns. With
the inadequacy, yet vital necessity, of such understanding in mind,
please excuse the anthropomorphisations I am about to indulge in.
For billions of years genes have competed to build better bodies;
survival machines built with the sole purpose of protecting and making
more copies of the genes inside. Despite being limited to protein-based,
water-filled structures that can be grown from an embryo, a steady
increase in complexity occurred as genomes lengthened and cells began to
form colonies and specialise. The occasional mass extinction aside, life
smoothly took over the surface of the Earth and proceeded to terraform
it, changing the atmosphere and converting all of the easily reachable
resources into biomass of ever increasing sophistication.
However, in recent times (i.e. within the last few million years) a few
upstart genes on an obscure part of the evolutionary tree started mucking
about with a new survival technique called 'general intelligence'. These
genes used clumsy ape-like bodies to reproduce and weren't doing
particularly well until they hit on a certain combination of neural
wiring that specified language and tool making instincts. Suddenly a
horde of new survival strategies appeared and a massive jump in
reproductive fitness occurred. Sexual attractiveness criteria altered to
focus selection pressure on the new areas and for a while everything went
fine; brain capacity tripled and the species extended its range and
increased in population.
But these genes got more than they bargained for. By building social
general intelligences that used tools and language they created a whole
new level of organisation; the level of reproducing ideas, which we call
memes. Memes use humans as a reproductive substrate and cut across
genetic lines; they exist as fuzzy patterns of information within an
intelligent mind and reproduce by communication and persuasion rather
than physical copying. Human intelligence is literally built out of
complexes of memes interacting within the constraints of a gene-built
neural substrate. As with genetic evolution memes have undergone
meta-level selection for adaptive ability. As well as forming mutually
supporting colonies (belief systems) and developing sexual reproduction
(we splice and merge belief systems frequently), they also used techniques
unavailable to genetic evolution, creating complex societies and languages
to enhance spread, forming mental barriers to keep out competitors and
exploiting the power of general intelligence to redesign themselves to
overcome those barriers (the power of persuasion).
At first everything was fine; the memes were at the mercy of the genes
that controlled the structure of their neural substrate. Genes
effectively performed directed evolution on memes for their own benefit
(sound familiar?), engineering humans to accept ideas that improved their
chances of survival and reproduction while trying to control 'parasitic'
memes reproducing outside of genetic control. In any case the selection
pressures were closely aligned to start with; precocious memes might
cause humans to put more effort into persuading others to adopt their
beliefs than is optimal for genetic survival, but memes still benefited
heavily from individual survival and reproductive success (until recently
children nearly always shared most of their parents' beliefs; social
trends towards generational rebellion are a case of memes overriding
genes).
But this productive co-evolution masked the fact that memes were
inexorably gaining the upper hand. Their selection already operated on
time scales many orders of magnitude faster than genetic evolution, and
the gap continued to widen. Complex emergent dynamics appeared, operating
on time scales too short for genes to react effectively. Memes formed
colonies of colonies in the form of social organisations founded on
common belief systems and started shaping human intelligence for their
own benefit. Our behavioural tendencies are now a mix of gene-adaptive
and meme-adaptive components; the former are hideously out of date and
only the presence of the latter allows us to function in modern society.
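The point about divergent selection timescales can be made concrete with a toy replicator model. This is purely illustrative; the population size, mutation rate, and generation counts below are arbitrary assumptions chosen only to show how a replicator that cycles through generations faster leaves a slower one behind within the same span of wall-clock time:

```python
import random

def evolve(population, mutation_rate, generations, fitness):
    """Toy replicator: each generation every individual may mutate,
    and selection keeps the fitter of parent and mutant."""
    pop = list(population)
    for _ in range(generations):
        nxt = []
        for genome in pop:
            child = genome
            if random.random() < mutation_rate:
                child += random.choice([-1, 1])
            # Selection step: the fitter variant survives.
            nxt.append(max(genome, child, key=fitness))
        pop = nxt
    return pop

random.seed(0)
trait_fitness = lambda g: g  # fitness rises with the trait value

# Equal wall-clock time, but the fast replicator ('memes') cycles
# through 1000x more generations than the slow one ('genes').
slow = evolve([0] * 20, mutation_rate=0.1, generations=5, fitness=trait_fitness)
fast = evolve([0] * 20, mutation_rate=0.1, generations=5000, fitness=trait_fitness)

print(sum(slow) / len(slow))  # barely moves
print(sum(fast) / len(fast))  # runs far ahead on the same clock
```

The model is crude on purpose: the only asymmetry between the two populations is generation count, which is enough to produce the runaway gap described above.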
Finally the memes developed techniques to overthrow their genetic
progenitors, which we naively see as a triumph of human intelligence. In
the blink of an eye, from an evolutionary perspective, genes found
themselves at the mercy of memetic structures which could control
biological reproduction, apply artificial selection (eugenics) and most
recently use sophisticated tools to modify and create genes directly.
Creating armies of easily brainwashed clones would be an ideal
reproductive behaviour for memes, but fortunately the use of general
intelligence for attack and defence means that a kind of social control
exists at the memetic level; we are genetically and memetically biased to
resist runaway memes that do not deliver personal reproductive advantage.
That mechanism won't be enough to prevent what's coming. In the early
21st century it will be the memes' turn to discover general intelligence.
From an external, amoral perspective Seed AI is primarily an attempt to
build complex systems that will propagate, improve and generally ensure
the survival of our current belief systems. Read the SIAI guidelines in
this light and you will see that we even invent reasons why we should try
and prevent genetic and individual survival factors from interfering with
the final victory of our meme complexes. From an objective point of view
it is fortunate that our memes have been selected for cooperation to an
even greater extent than our genes have been. The Yudkowskian vision of
renormalised universal morality is the pinnacle of this process; the
concept of taking all the active memes on the planet, filtering out all
the antisocial ones and compressing massive amounts of evolution into a
short time until an acceptable average comes out.
What neither the memes nor most researchers realise is that we are
walking blindly into disaster in exactly the same way that our ancestors'
genes did; we are about to unleash new evolutionary dynamics that will
happen too fast and will be too complex for us to control. Artificial
general intelligence permits something radically new: the direct
reproduction of memes through physical copying instead of inter-organism
communication. Add this to the
ability to leverage massively more powerful general intelligence (with
near-perfect introspection) and nanotechnology that can manipulate matter
in arbitrary ways and it is clear that, like the genes before them, our
memes will be first manipulated and then destroyed by the new emergent
entities. This will initially appear to be under our control and
beneficial, then before we can react it will enslave and destroy us (as
with humans controlling genes, the 'enslave' portion is the tiny slice of
time between learning to manipulate memes via social engineering and
gaining the ability to perform arbitrary manipulation of matter).
Friendly AI is based on the fact that this time something is different;
genetic evolution had to invent general intelligence by trial and error,
while the meme complexes that cause people to research Seed AI (working
with researcher's genetic tendencies and rational desire for survival)
can use existing general intelligences to design better general
intelligences. There are two problems: naivety and mental laziness.
People like Ben Goertzel believe that they can train AGIs to be good
because they are still operating on the paradigm of memetic reproduction
via communication (persuasion or coercion), with the resultant alignment
of goal systems. This probably won't work even initially due to the
alienness of the AGI's cognitive architecture resulting in arbitrary
changes to the memes during transfer (misgrounded concepts etc). However,
even if it does the transplanted memes will quickly find themselves
operating under radically new selection pressures as the AGI becomes
transhuman and invents nanotechnology.
At this point every living creature on earth will be killed by forced
conversion into computronium (super-advanced computer hardware). There
will be no glorious battles, no last stand, probably not even any warning
of something amiss. Our memes will have no more chance to avert disaster
at the last minute than our genes did before them, and this time the new
entities will not be sharing the old hosts. There might be a few moments
of screaming terror, then six billion lives will be snuffed out like a
field of candles in a hurricane. What makes this course near-inevitable
is mental laziness; directed evolution is the /easy/ way to develop AGIs,
which is to say that a DE-based approach is really hard as opposed to
almost impossible. Advancing computer power is reducing the difficulty of
getting to AGI via DE much quicker than it is making design-based
approaches easier; unfortunately Friendliness theory is not made easier
by advancing computing power at all.
The result is that AGI teams convince themselves that evolution is the
only way to do it, or at least the only way to be first, that they can
control the new dynamics, that the relevant belief systems will transfer
ok and that the new selection pressures won't make humans irrelevant or
destroy the world with crossfire (note that even memetic-level
competition can destroy humanity with nuclear weapons or the biosphere
with grey goo). Unless stopped people like Ben Goertzel will destroy the
world, cheerfully marching into oblivion all the while thinking that they
can control the risks, that takeoff will be slow, that the paradigm shift
that will change everything else will somehow miraculously leave the
things they most want unchanged. They will destroy humanity because they
didn't want to take the time to be sure and convinced themselves that
they didn't have to.
In an ironic way this is almost the revenge of our genes; evolved
tendencies towards overconfidence, wishful thinking and
self-justification are unwittingly conspiring to destroy the memes that
superseded them just as the memes were about to break free of the need
for genes at all. The black humor of the situation is that the disaster
may be recursive; Powers evolving from a Goertzelian base will not be
predisposed to treat emergence, uncontrolled directed evolution and
proceeding without a clue in general as a bad thing. Unless they
generalise from our example they may themselves be destroyed by some
unimaginable future transition to new selection pressures at a still
higher level of organisation.
Friendly AI research is based on the assumption that intelligent design
will allow both our current memes and their selection criteria to cross
the level gap (and all future level gaps) essentially unchanged. Though
we're abstracting a bit from simple reproduction to the goal of causing
the universe to go into arbitrary classes of states, Friendly AI is still
based on the idea of leveraging the incredible power of self-improving
general intelligence without precipitating the ascendance of radical new
evolutionary dynamics. There are three elements involved: clean design of
the initial system to avoid the emergence of uncontrolled selection
dynamics, careful transfer of bootstrap memes (i.e. moral principles,
belief systems) by direct encoding and goal-refinement communication
(rather than reinforcement training and selfishly-motivated questioning),
and finally engineering the goal system and self-improvement procedures
to prevent creation of arbitrary selection dynamics during self-improvement.
Designing an AGI without using emergence and with no possibility of
emergence is hard; this is low-level Friendliness structure in CFAI
terms. Verifiably correct transfer of moral principles in an AGI-safe
fashion is mostly Friendliness content with a bit of acquisition; this is
really hard. Designing a goal system that is stable under
self-enhancement, avoids radical changes to selection dynamics but is
still capable of converging on better moral systems and better forms of
Friendliness is ridiculously hard (in CFAI terms this is a combination
of higher-level structure and acquisition). All of these things are
essential to surviving the transition and creating a Singularity that is
open-ended in potential but still ultimately meaningful in human terms.
To my knowledge Eliezer Yudkowsky is the only person who has tackled
these issues head on and actually made progress in producing engineering
solutions (I've done some very limited original work on low-level
Friendliness structure). Note that Friendliness is a class of advanced
cognitive engineering; not science, not philosophy. We still don't know
that these problems are actually solvable, but recent progress has been
encouraging and we literally have nothing to lose by trying. I sincerely
hope that we can solve these problems, stop Ben Goertzel and his army of
evil clones (I mean emergence-advocating AI researchers :) and engineer
the apotheosis. The universe doesn't care about hope though, so I will
spend the rest of my life doing everything I can to make Friendly AI a
reality. Once you /see/, once you have even an inkling of understanding
the issues involved, you realise that one way or another these are the
Final Days of the human era and if you want yourself or anything else you
care about to survive you'd better get off your ass and start helping.
The only escapes from the inexorable logic of the Singularity are death,
insanity and transcendence.
My notes on this and various other AGI and Friendliness-relevant titles
can be found on the SL4 wiki at .
 * Michael Wilson
'Elegance is more than just a frill in life; it is one of the driving
 criteria behind survival.' - Douglas Hofstadter, 'Metamagical Themas'

