Re: Retrenchment

From: Eliezer S. Yudkowsky
Date: Thu Aug 18 2005 - 17:14:25 MDT

Shades of the laundry list in

Richard Loosemore wrote:
> ************************************************************
> Now, some of these communities are more directly hands-on, while some
> just watch and comment and contribute from afar, but the six different
> languages that they speak and the six different paradigms they bring to
> the table are all in some way relevant to the task of understanding how
> cognitive systems might work, and how we might go about building an AI.
> But the problem is that you can go into one of these communities and
> find very talented people who are completely ignorant of what is going
> on in the others. Often, it is not just ignorance but actual scorn and
> disdain, as if they are proud not to know what is happening elsewhere,
> because they regard the ideas (and sometimes the people) in some of
> those other communities as irrelevant or stupid. Frequently, people
> have a smattering of some other field and think that they therefore know
> it all.

Some roads go on, if not forever, then a long long way. It's a funny thing,
you know; I was just about to ask you if you belonged to some communities, but
apart from neuroscience they weren't on your list. Incidentally, a decent
neuroscience-user ought to know many special cases of human brain damage, and
you did not say of that community that they would recognize the name of
Phineas Gage. I mention this because studying human brain-damage cases can
also help defeat anthropomorphism.

I recall from memory - though not, I fear, complete with a full citation - an
experiment done in social psychology. A person X asked another person Y a set
of far-ranging questions, which could be drawn from any knowledge X had
managed to accumulate, in front of another person Z. Naturally, Y didn't know
the answers to nearly all of the questions. X and Y were randomly assigned to
their roles, and there are *very* few people who know so much that they can
readily answer questions from *any* other person's specialty. The interesting
part was when they asked the participants whether X or Y looked more
intelligent. X usually said that he did not think he had looked particularly
intelligent; Y said that X looked somewhat more intelligent; Z said that X
looked very much more intelligent. The moral is that you can look very
intelligent by asking people hard-to-answer questions.

I expect you were not aware of this phenomenon, nor deliberately trying to
exploit this known human bias. But as for me, I recognized more than half of
the obscure names you listed, though not all, and I know how good a showing
that *really* is - after discounting the effects of the standard human bias
which this situation happens to match.

Having said that, then by all means, if it comes to showing off, two can play
at *that* game.

I have never encountered someone who might qualify as a member of all the
communities I think to be necessary. Possibly Eric B. Baum, but with him I
have not yet spoken. (Still reading through Baum's book, but he quotes the
right people.)

Here are the (G), (H), and (I) I'd add to your list. I wouldn't be surprised
to find that in another two years I add a (J) and (K). Some roads go on
forever, and others just go on a very long way, and sometimes also roads are
much less long than they look. Some roads we never know the length until we
have arrived at our destination, and look back on our pathway; and I am not
yet at my destination.


(G) The disciples of Tversky and Kahneman: they understand the details of human
reasoning well enough to illustrate exactly how it fails, not just try to
explain the mysteries of its success. Those well-rounded in this field study
social psychology as well as heuristics and biases, and they overlap
communities (H) and (I) on Bayesian rationality and evolutionary psychology
respectively. They know the conjunction fallacy and sometimes they even avoid
it; they know things about human overconfidence that would freeze the blood of
most people who think themselves pessimists. Infant and developmental
psychology is of particular interest to AI builders.
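The conjunction fallacy mentioned above comes down to a single inequality of probability theory. Here is a minimal sketch in Python; the Linda-the-bank-teller framing is Tversky and Kahneman's, but the numbers are illustrative assumptions of mine, not experimental data:

```python
# The conjunction rule: for any events A and B, P(A and B) <= P(A),
# no matter how "representative" the conjunction sounds. Subjects in the
# Linda experiment routinely rank the conjunction as MORE probable.

p_bank_teller = 0.05            # P(Linda is a bank teller) - assumed
p_feminist_given_teller = 0.60  # P(feminist | bank teller) - assumed

# Conjoining a further condition can only shave probability off:
p_feminist_bank_teller = p_bank_teller * p_feminist_given_teller

assert p_feminist_bank_teller <= p_bank_teller
print(p_bank_teller, p_feminist_bank_teller)
```

The inequality holds for *any* choice of the two assumed numbers, which is the whole point: no amount of vividness can make the conjunction more probable than its conjunct.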


(H) The students of an ancient art devised by Laplace, which is therefore called
Bayesian. Probability theory, decision theory, information theory,
statistics; Kolmogorov and Solomonoff, Jaynes and Shannon. The masters of
this art can describe ignorance more precisely than most folk can describe
their knowledge, and if you don't realize that's a pragmatically useful
mathematics then you aren't in community (H). These are the people to whom
"intelligence" is not a sacred mystery... not to some of us, anyway.


(I) Down with the Standard Social Sciences Model! Long live the Unified Causal
Model! If you recognized that as a parody of Tooby and Cosmides, you still
may not know anywhere near as much as you think you do about evolutionary
biology - not unless you know the difference between Hardy-Weinberg
equilibrium and linkage equilibrium, or you can show how Price's Equation
generalizes Fisher's Fundamental Theorem of Natural Selection. Those who seek
to build AI will have studied the evolutionary anthropology of human
intelligence, a la Terrence Deacon. But there is a more important use for
evolutionary biology. There are two known, studied, powerful optimization
processes in this world: cumulative natural selection, and the human mind.
And interestingly enough, science understands natural selection a lot more
solidly than it understands humans. The reason is simple enough; natural
selection is a much simpler optimization process - so simple, in fact, that it
can't help but accrete needlessly complex processes like human intelligence.
If you really want to learn how not to anthropomorphize, study evolutionary
biology with math.
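For reference, here are the standard textbook forms behind that challenge, as a sketch rather than a derivation (Fisher's theorem properly concerns additive genetic variance in fitness; this shows only the flavor of the generalization):

```latex
% Price's Equation: the change in the population mean of a trait z over
% one generation, where w_i is the fitness of individual i and \bar{w}
% is mean fitness.
\bar{w}\,\Delta\bar{z} = \operatorname{Cov}(w_i, z_i)
                         + \operatorname{E}\!\left(w_i\,\Delta z_i\right)

% Take the trait to be fitness itself (z_i = w_i) and assume faithful
% transmission (\Delta z_i = 0); the covariance term becomes a variance:
\bar{w}\,\Delta\bar{w} = \operatorname{Var}(w)
% which recovers the flavor of Fisher's Fundamental Theorem: the rate of
% increase of mean fitness is proportional to the variance in fitness.
```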

Loosemore, I commend to you the following documents with respect to community
(H), which you most sorely need to join, followed thereafter by (I) and (G).

Having read the second document, you will understand what is wrong, as a
matter of scientific procedure and rational reasoning, with attributing human
intelligence to "the emergent properties of a hypercomplex network" as though
this were a hypothesis.

You can skip the first document if you understand simple Bayesian reasoning
*thoroughly*, by which I mean that you can write Bayes's Rule from memory and
that you know why transforming probabilities to log-odds ratios can simplify
repeated updating into addition.
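The log-odds trick can be sketched quickly, under the assumption of independent pieces of evidence; the function names and the 4:1 likelihood ratio below are illustrative choices of mine:

```python
# In log-odds form, each independent piece of evidence ADDS its
# log-likelihood-ratio to the running belief, turning a chain of
# multiplications and renormalizations into simple sums.

import math

def to_log_odds(p):
    """Probability -> log-odds."""
    return math.log(p / (1 - p))

def to_probability(log_odds):
    """Log-odds -> probability (logistic function)."""
    return 1 / (1 + math.exp(-log_odds))

# Start at 50/50, then observe three independent pieces of evidence,
# each 4 times as likely if the hypothesis is true.
belief = to_log_odds(0.5)          # = 0.0
for _ in range(3):
    belief += math.log(4.0)        # one addition per observation

print(round(to_probability(belief), 4))  # 64:1 odds, i.e. 0.9846
```

Three additions of log(4) give odds of 64:1; doing the same computation directly on probabilities requires renormalizing after every update.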

You may also be interested in the evolutionary psychology of human
intelligence and my own take on human concepts, findable at the now-obsolete:

> Those who would understand completely if you started discussing
> "Shiffrin and Schneider", or "Levels of Processing Theory", or "Deep
> Dyslexia." They would also what a word-exhange error, a
> morpheme-exchange error or a semantic substitution was. They would
> understand the power-law of skill acquisition. They would know about
> "Smith and Medin" if you asked them for their thoughts about classical,
> prototype and feature theories of concepts, and what the current thinking
> was on those issues; they might not be able to answer immediately, but
> they would know what you were asking for, and would be able to track it
> down pretty quickly. And if you mentioned "motivation", "compulsion",
> "obsession" or "pleasure" they would assume you were just using these as
> shorthand for certain mechanisms, rather than assuming you were talking
> dualist philosophy.

Smith and Medin? Eleanor Rosch or George Lakoff would be more appropriate
names to cite here, no? You seem to have gotten stuck in an eddy of this
field; your selected highlights don't sound like central exemplars of the
category. And why do you lump motivation in with that?

It sounds like your community (A) is intended to stand for the whole of
cognitive psychology, a field which includes also (G) as a special case. If
so, you need to study way more cognitive psychology. A fine and classic book
is "Judgment Under Uncertainty" from (G).

> Those who know how to write serious amounts of LISP, who understand
> pruning algorithms in state-space search, and who know the difference
> between Blackboards, GAs, and neural nets. They might well be able to
> tell you about "unification" in Prolog, and be able to give you a
> thoughtful discussion of the differences between John Koza's genetic
> programming and genetic algorithms and evolutionary programming. They
> would definitely know about goal hierarchies and planning systems.

Also known as Mainstream AI: the predicate logic users, connectionists, and
artificial evolutionists. What they know about goal hierarchies and planning
systems is quite different from what decision theorists know about expected
utility maximization, though of course there's some overlap.

I note that FAI issues lie much closer to decision theory. It is better, in
discussing FAI, to know who first wrote about Newcomb's Problem than to know
who built SHRDLU.

I once wrote of this field that it is important primarily as a history of failure.

> Those to whom the term "edge of chaos" is not just something they
> learned from James Gleick. These people are comfortable with the idea
> that mathematics is a fringe activity that goes on at the tractable edge
> of a vast abyss of completely intractable systems and equations. When
> they use the term "non-linear" they don't mean something that is not a
> straight line, nor are they talking about finding tricks that yield
> analytic solutions to certain nonlinear equations. They are equally
> comfortable talking about a national economy and a brain as a "CAS" and
> they can point to meaningful similarities in the behavior of these two
> sorts of system. Almost all of these people are seriously well versed
> in mathematics, but unlike the main body of mathematicians proper, they
> understand the limitations of analytic attempts to characterize systems
> in the real world.

I'm not part of community C and I maintain an extreme skepticism of its
popular philosophy, as opposed to particular successful technical
applications, for reasons given in "A Technical Explanation of Technical
Explanation".

> Those who have had the kind of experience in which they find themselves
> fifty levels deep in a debugger, working on Final Candidate 7 of a piece
> of software comprising 4000 source files of C and C++, in a hopeless
> attempt to troubleshoot problems in a codebase that they have only
> written tiny parts of, and in which the rest is mostly undocumented and
> barely commented (by people who often did not have much English), with
> the product due to ship in two weeks. An experience that bears about as
> much relationship to a computer science degree as Mrs Featherstone's
> Finishing School For Young Ladies does to a whorehouse.

I've stayed up 36 hours in a row, twelve levels deep in a debugger, hunting a
mysterious stack smasher in a C++ application with at least 200 source files,
which I did write myself. Close enough. Long live Python!

> Those who could give you a reasonable account of where Penrose, Chalmers
> and Dennett would stand with respect to one another. They could easily
> distinguish the Hard Problem from other versions of the consciousness
> issue, even if they might disagree with Chalmers about the conclusion to
> be drawn. They know roughly what supervenience is. They could certainly
> distinguish functionalism (various breeds thereof) from epiphenomenalism
> and physicalism, and they could talk about what various camps thought
> about the issues of dancing, inverted and absent qualia.

Sadly I recognize every word and phrase in this paragraph, legacy of a wasted
childhood, like being able to sing the theme song from Thundercats.

> Those who know enough about real neural hardware to think that there are
> serious questions about whether the real computation takes place in
> floods of junk in a synaptic cleft or in specific timings of incoming
> spikes in the dendritic tree. They know about programmed cell death and
> how that might relate to learning. They might know about the
> Hodgkin-Huxley equations. They understand what Marr had to say about
> the possible role of Purkinje cells in fine motor control, and they
> would know way too much about the architectural features of the brain.

We *know* that dendritic computing exists; it's no longer a "serious question"
but an answered one.

Eliezer S. Yudkowsky                
Research Fellow, Singularity Institute for Artificial Intelligence
