From: Eliezer S. Yudkowsky (email@example.com)
Date: Fri Jul 14 2006 - 11:57:22 MDT
Eric Baum wrote:
> These matters are discussed in more detail in "What is Thought?",
> particularly the later chapters.
You may assume I've read it.
> I enjoyed "Levels of Organization in General Intelligence". I very
> much agree that there must be depth and complexity in the [...]
It should be emphasized that I wrote LOGI in 2002; many of my opinions
have updated since then, although I still think that LOGI stands as a
decent hypothesis about the evolutionary psychology of *human* general
intelligence.
One particular disagreement of Eliezer-2006 with Eliezer-2002 is that I
would no longer resort to using "complexity" as an adjective for any
process for which I do not know *specifically* how it is complex. As I
mention in a later essay, "A Technical Explanation of Technical
Explanation", the following statements are all roughly equivalent in how
much they shape our anticipated experiences:
"Human intelligence is an emergent product of neurons firing."
"Human intelligence is a magical product of neurons firing."
"Human intelligence is a product of neurons firing."
There are many synonyms for magic - words which seem to explain how
something interesting is done, without telling us what to anticipate,
without even being able to *exclude* experiences and tell us what we
should *not* see. "Phlogiston" is magic. "Elan vital" is magic. And
if you don't have a specific complex process in mind, saying that
"complexity" does something is magic. So I no longer go around saying
that intelligence is complicated, unless I have something specific in
mind which happens to be complicated. Anything I don't understand is
not "rich and deep and complex", it is something I don't understand.
In retrospect, saying that intelligence was rich and deep and complex
was the right mistake to make at the time, because it got me to go out
and study many subjects; read much more broadly than I would have done,
if I had made the complementary mistake of thinking that I had definite
knowledge to the effect that intelligence was simple. Nonetheless,
calling something "complex" doesn't explain it.
> There is one point, however, I wish to clarify.
> You state "The accelerating development of the hominid family and the
> exponential increase in human culture
Today I wouldn't call the increase in human culture "exponential",
though I'd call it "accelerating". As for hominids, I'm not sure that
the last seven million years can even be fairly characterized as
"accelerating". We went over some kind of critical threshold, but
there's no evidence that the trip up to the threshold was an
acceleration.
> are both instances of *weakly self-improving processes*,
> characterized by an externally constant process (evolution, modern
> human brains) acting on a complexity pool (hominid genes, cultural
> knowledge) whose elements interact synergetically. If we divide the
> process into an improver and a content base, then weakly
> self-improving processes are characterized by an external improving
> process with roughly constant characteristic intelligence, and a
> content base within which positive feedback takes place under the
> dynamics imposed by the external process." (477)... "A seed AI is a
> *strongly self improving process*, characterized by improvements to
> the content base that exert direct positive feedback on the
> intelligence of the underlying improving process." (478) [italics in
> original] and go on to suggest the possibility that a seed AI may thus
> accelerate its progress in ways beyond what has happened to human culture.
My words above, I would still defend. I'd phrase it slightly
differently, here and there - for example, "synergetically" is
synonymous with "magically". And I wouldn't say "a seed AI is a
strongly self-improving process", but, "I think it should be possible to
construct some classes of minds that are strongly self-improving".
Nonetheless, overall I still hold the above position.
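The weak/strong distinction quoted above can be caricatured in a few lines of code. This is a toy model with invented numbers and update rules, not a claim about any real system; the only point is the structural difference between feedback confined to the content base and feedback that reaches the improver itself.

```python
def weakly_self_improving(steps):
    """External improver of roughly constant power acts on a content base."""
    improver_power = 1.0          # constant: evolution, or modern human brains
    content = 1.0                 # hominid genes, or cultural knowledge
    for _ in range(steps):
        content += improver_power * 0.1 * content   # feedback in content only
    return content

def strongly_self_improving(steps):
    """Improvements to the content base feed back into the improver itself."""
    improver_power = 1.0
    content = 1.0
    for _ in range(steps):
        content += improver_power * 0.1 * content
        improver_power = content ** 0.5             # the improver gets smarter too
    return content

# The strong variant pulls away from the weak one as steps accumulate.
assert strongly_self_improving(20) > weakly_self_improving(20)
```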
> I would like to respectfully suggest the possibility that this
> overlooks a ramification of the layered and complex nature of
> intelligence. It seems that the very top level of an intelligent
> system (including a human) may be (or indeed to some extent may
> intrinsically have to be) a module or system that actually knows very little.
Knows very little, or does something very simple? Though I appreciate
that in your system, appropriate behavior is considered "knowledge".
Still, there are such things as complex processes that know very little,
and simple processes that know a great deal.
A giant lookup table is a simple process that may know an arbitrarily
large amount, depending on the incompressibility of the lookup table. A
human programmer turned loose on the purely abstract form of a simple
problem (e.g. stacking towers of blocks), who invents a purely abstract
algorithm (e.g. mergesort) without knowing anything about which specific
blocks need to be moved, is an example of a complex process that used
very little specific knowledge about that specific problem to come up
with a good general solution.
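The two quadrants can be made concrete. Below, a lookup table is a trivially simple process whose "knowledge" is all in its stored entries (the table here is a tiny hypothetical stand-in; imagine it scaled up arbitrarily), while mergesort is a comparatively complex process that embodies no knowledge about any specific blocks:

```python
# Simple process, lots of stored knowledge: answers are just looked up.
lookup_table = {(3, 1, 2): (1, 2, 3), (2, 1): (1, 2), (1,): (1,)}

def table_sort(blocks):
    return lookup_table[tuple(blocks)]   # no algorithm, only stored answers

# Complex process, no problem-specific knowledge: works on any comparable items.
def mergesort(items):
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = mergesort(items[:mid]), mergesort(items[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

assert table_sort([3, 1, 2]) == (1, 2, 3)        # only works on stored inputs
assert mergesort([9, 4, 7, 1]) == [1, 4, 7, 9]   # works on anything comparable
```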
> An example would be the auctioneer in a Hayek system (which only
> knows to compare bids and choose the highest) or some other kind of
> test module that simply tries out alternative lower modules and
> receives a simple measure of what works and keeps what works, such as
> various proposals of universal algorithms etc. Such a top layer
> doesn't know anything about what it is comparing or how it is
> computed. It's a chunk of fixed code. One reason why it makes sense to
> assert there can't be some very smart top level is basically the
> same reason why Friedrich Hayek asserted you couldn't run a controlled
> economy.
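The auctioneer described above really is that simple; a sketch makes the point vivid. The module behaviors and bids here are invented placeholders, not Baum's actual Hayek-machine code - the only faithful part is that the top level knows nothing except how to compare bids:

```python
def auctioneer(modules, state):
    """Pick the highest bidder; knows nothing about what the bids mean."""
    return max(modules, key=lambda m: m["bid"](state))

# Hypothetical lower modules, each computing its own bid from the state.
modules = [
    {"name": "mover",   "bid": lambda s: s.get("distance", 0)},
    {"name": "grabber", "bid": lambda s: s.get("nearby", 0) * 10},
]

winner = auctioneer(modules, {"distance": 3, "nearby": 1})
assert winner["name"] == "grabber"   # bid of 10 beats bid of 3
```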
I think Hayek's assertion analogously argues that the "top level" - if
such a term even makes sense relative to a specific architecture - can't
be as smart as the entire AI, unless the "top level" is identical with
the entire AI, or the rest of the AI doesn't do any cognitive work.
However, the Hayek analogy doesn't necessarily argue that the top level
must be simple. In fact, the "uppermost level" of a large market
economy is so much more complex than any single human decisionmaker that
replacing the "uppermost level" with human strategizing results in
catastrophe, albeit for more reasons than simple cognitive incapacity.
One could just as easily argue that having an overly simple top level
was exactly the *mistake* of a planned economy.
Socialism doesn't work for human economies, but that doesn't show that
all good design looks like an emergent level of organization atop units
that were optimized for fitness in a strictly local context. There are
things that human societies can't get away with that would be perfectly
possible if we were designing the social agents from scratch. The cells
in a multicellular organism are more cooperative than ants in a colony,
which are more cooperative than a human society, owing to the levels on
which selection acted and the effectiveness of evolved regulatory
mechanisms against selfishness on lower levels of organization.
Is the term "top level" really all that useful for describing
evolutionary designs? The human brain has more than one center of
gravity. The limbic system, the ancient goal system at the center, is a
center of gravity; everything grew up around it. The prefrontal cortex,
home of reflection and the self-model, is a center of gravity. The
cerebellum, which learns the realtime skill of "thinking" and projects
massively to the cortex, is a center of gravity.
> But even if there would be some way to keep modifying the top level
> to make it better, one could presumably achieve just as powerful an
> ultimate intelligence by keeping it fixed and adding more powerful
> lower levels (or maybe better yet, middle levels) or more or better
> chunks and modules within a middle or lower level.
The notion of a "top level", to me, suggests that there's some behavior
such that it's useful to implement it over everything done by other
modules. In the human mind, three such behaviors are reinforcement,
reflection, and realtime control. A "top level" behavior can be simple
or complex, and it can embody a lot or a little knowledge.
> Along these lines, I tend to think that creatures evolved
> intelligence and "consciousness" in this fashion: a decision making
> unit that didn't know much but picked the best alternative ("best"
> according to simple pain/reward signals passed to it) evolved first
> (already in bacteria), followed by evolution in the sophistication of
> the information calculated below the top level decision unit. No
> doubt there was some evolution in "the top level" to better interface
> with the better information being passed up, but this was not
> necessarily the crux of the matter. So in some sense, "wanting" and
> "will" may have come first evolutionarily, and consciousness simply
> became more sophisticated and nuanced as evolution progressed. This
> also seems different than your picture.
I did later have the thought that organisms needed to start out with
complete circuits - simple neural thermostats that implemented extremely
primitive modality structure and category structure and event structure
all at once. A better evolutionary theory would state how
organizational structure differentiated out from there. Obviously, a
system that is too incomplete to guide the organism cannot exist as an
evolutionary intermediate. I am not sure that this is equivalent to
what you are thinking, but as an amendment, it seems to have some of the
same flavor.
It is certainly an intriguing suggestion that, like ATP synthase, the
fast and frugal heuristic "Take the Best" is older than eukaryotic life
and was passed down to humans without substantial modification along the
way. But I'm not sure I believe it.
From my perspective, this argument over "top levels" doesn't have much
to do with the question of recursive self-improvement! It's the agent's
entire intelligence that may be turned to improving itself. Whether the
greatest amount of heavy lifting happens at a "top level", or lower
levels, or systems that don't modularize into levels of organization;
and whether the work done improves upon the AI's top layers or lower
layers; doesn't seem to me to impinge much upon the general thrust of I.
J. Good's "intelligence explosion" concept. "The AI improves itself."
Why does this stop being an interesting idea if you further specify that
the AI is structured into levels of organization with a simple level
describable as "top"?
> I further think that a sufficient explanation (which is also the
> simplest explanation, and is in accord with various data including
> all known to me, and is thus my working assumption) for the
> divergence between human and ape intelligence is that the discovery
> of language allowed greatly increased "culture", ie allowed
> thought-programs to be passed down from one human to another and thus
> to be discovered and improved by a cumulative process, involving the
> efforts of numerous humans.
If chimps had language, would they achieve human levels of technological
sophistication? This is not a rhetorical question. A human raised in
isolation, a wolfling child, may not be much more than a chimp. So is
most of the difference our software?
A human raised in total darkness would end up blind. That doesn't mean
that the software of the visual cortex is stored primarily in the
environment and in the laws of optics - that you can apply a simple
decryptor to the photons as they strike, and end up with the code of a
visual cortex. It means that the software of the visual cortex evolved
to treat the visual environment as an invariant and does not properly
develop in the absence of that invariant. Tooby and Cosmides wrote
about this at length in "The Psychological Foundations of Culture".
Genes can store information in the environment, but it's still the genes
deciding what to store and how to retrieve it.
> I think the hard problem about achieving intelligence is crafting the
> software, which problem is "hard" in a technical sense of being
> NP-hard and requiring major computational effort,
As I objected at the AGI conference, if intelligence were hard in the
sense of being NP-hard, a mere 10^44 nodes searched would be nowhere
near enough to solve an environment as complex as the world, nor find a
solution anywhere near as large as the human brain.
*Optimal* intelligence is NP-hard and probably Turing-incomputable.
This we all know.
But if intelligence had been a problem in which *any* solution
whatsoever were NP-hard, it would imply a world in which all organisms
up to the first humans would have had zero intelligence, and then, by
sheer luck, evolution would have hit on the optimal solution of human
intelligence. What makes NP-hard problems difficult is that you can't
gather information about a rare solution by examining the many common
attempts that failed.
Finding successively better approximations to intelligence is clearly
not an NP-hard problem, or we would look over our evolutionary history
and find exponentially more evolutionary generations separating linear
increments of intelligence. Hominid history may or may not have been
"accelerating", but it certainly wasn't logarithmic!
If you are really using NP-hard in the technical sense, and not just a
colloquial way of saying "bloody hard", then I would have to say I
flatly disagree: Over the domain where hominid evolution searched, it
was not an NP-hard problem to find improved approximations to
intelligence by local search from previous solutions.
Now as Justin Corwin pointed out to me, this does not mean that
intelligence is not *ultimately* NP-hard. Evolution could have been
searching at the bottom of the design space, coming up with initial
solutions so inefficient that there were plenty of big wins. From a
pragmatic standpoint, this still implies I. J. Good's intelligence
explosion in practice; the first AI to search effectively enough to run
up against NP-hard problems in making further improvements, will make an
enormous leap relative to evolved intelligence before running out of steam.
> so the ability to make sequential small improvements, and bring to
> bear the computation of millions or billions of (sophisticated,
> powerful) brains, led to major improvements.
This is precisely the behavior that does *not* characterize NP-hard
problems. Improvements on NP-hard problems don't add up; when you tweak
a local subproblem it breaks something else.
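That contrast - local tweaks accumulating versus breaking something else - can be illustrated with a toy experiment. The fitness functions below are invented for the demonstration: one is decomposable (each bit contributes independently), the other assigns an arbitrary value to each whole configuration, a crude stand-in for the tight coupling that makes NP-hard instances resist local search.

```python
import random

def hill_climb(fitness, n_bits=12, steps=500, seed=0):
    """Greedy local search: flip one bit at a time, keep non-worsening moves."""
    rng = random.Random(seed)
    x = [0] * n_bits
    for _ in range(steps):
        i = rng.randrange(n_bits)
        y = x[:]
        y[i] ^= 1                      # tweak one local subproblem
        if fitness(y) >= fitness(x):
            x = y
    return fitness(x)

# Decomposable landscape: each bit helps independently; tweaks add up.
additive = lambda x: sum(x)

# Coupled landscape: an arbitrary value per whole configuration, so
# a local flip carries no information about where the good solutions are.
_rng = random.Random(42)
_table = {}
def coupled(x):
    key = tuple(x)
    if key not in _table:
        _table[key] = _rng.random()
    return _table[key]

assert hill_climb(additive) == 12   # local search reaches the optimum
# On the coupled landscape the same search just wanders, with no
# guarantee of progress toward anything.
```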
> I suggest these improvements are not merely "external", but
> fundamentally affect thought itself. For example, one of the
> distinctions between human and ape cognition is said to be that we
> have "theory of mind" whereas they don't (or do much more weakly).
> But I suggest that "theory of mind" must already be a fairly complex
> program, built out of many sub-units, and that we have built
> additional components and capabilities on what came evolutionarily
> before by virtue of thinking about the problem and passing on partial
> progress, for example in the mode of bed-time stories and fiction.
> Both for language itself and things like theory of mind, one can
> imagine some evolutionary improvements in ability to use it through
> the Baldwin effect, but the main point here seems to be the use of
> external storage in "culture" in developing the algorithms and
> passing them on. Other examples of modules that directly affect
> thinking prowess would be the axiomatic method, and recursion, which
> are specific human discoveries of modes of thinking, that are passed
> on using language and improve "intelligence" in a core way.
Considering the infinitesimal amount of information that evolution can
store in the genome per generation, on the order of one bit, it's
certainly plausible that a lot of our software is cultural. This
proposition, if true to a sufficiently extreme degree, strongly impacts
my AI ethics because it means we can't read ethics off of generic human
brainware. But it has very little to do with my AGI theory as such.
Programs are programs.
But try to teach the human operating system to a chimp, and you realize
that firmware counts for *a lot*. Kanzi seems to have picked up some
interesting parts of the human operating system - but Kanzi won't be
entering college anytime soon.
The instructions that human beings communicate to one another are
instructions for pulling sequences of levers on an enormously complex
system, the brain, which we never built. If the machine is not there,
the levers have nothing to activate. When the first explorers of AI
tried to write down their accessible knowledge of "How to be a
scientist" as code, they failed to create a scientist. They could not
introspect on, and did not see, the vast machine their levers
controlled. If you tell a human, "Try to falsify your theories, rather
than trying to prove them," they may learn something important about how
to think. If you inscribe the same words on a rock, nothing happens.
Don't get me wrong - those lever-pulling sequences are important. But
the true power, I think, lies in the firmware. Could a human culture
with a sufficiently different "operating system" be more alien to us
than bonobos? More alien than a species that evolved on another planet?
If most of the critical complexity is in the OS, then you'd expect this
to be the case. Maybe I just lack imagination, but I have difficulty
imagining it.
> Another ramification of this layered picture are all the ways that
> evolution evolves to evolve better, including finding meaningful
> chunks that can then be put together into programs in novel ways.
> These are analogous to adding or improving lower layers on an
> intelligent system, which may make it as intelligent as modifying the
> top layers would in any conceivable way. Evolution, which
> constructed our ability to "rationally design", may apply very much
> the same processes on itself.
> I don't understand any real distinction between "weakly self
> improving processes" and "strongly self improving processes", and
> hence, if there is such a distinction, I would be happy for
> clarification.
The "cheap shot" reply is: Try thinking your neurons into running at
200MHz instead of 200Hz. Try thinking your neurons into performing
noiseless arithmetic operations. Try thinking your mind onto a hundred
times as much brain, the way you get a hard drive a hundred times as
large every 10 years or so.
Now that's just hardware, of course. But evolution, the same designer,
wrote the hardware and the firmware. Why shouldn't there be equally
huge improvements waiting in firmware? We understand human hardware
better than human firmware, so we can clearly see how restricted we are
by not being able to modify the hardware level. Being unable to reach
down to firmware may be less visibly annoying, but it's a good bet that
the design idiom is just as powerful.
"The further down you reach, the more power." This is the idiom of
strong self-improvement and I think the hardware reply is a valid
illustration of this. It seems so simple that it sounds like a cheap
shot, but I think it's a valid cheap shot. We were born onto badly
designed processors and we can't fix that by pulling on the few levers
exposed by our introspective API. The firmware is probably even more
important; it's just harder to explain.
And merely the potential hardware improvements still imply I. J. Good's
intelligence explosion. So is there a practical difference?
> Eric Baum
--
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence