Re: Retrenchment

From: Richard Loosemore
Date: Thu Aug 25 2005 - 20:15:39 MDT


I really did not want that "Retrenchment" post to be taken as a piece of
oneupmanship and name-dropping. It was meant to be an observation about
what I saw as the sad state of a field of inquiry ... it is a shame that
some people reacted as if it was nothing but a personal insult.

Of the comments you make below, some appear to be of the sort "You
mentioned x and y in a certain field, but you didn't mention z ... don't
you know about z?".

For example:

> Smith and Medin? Eleanor Rosch or George Lakoff would be more
> appropriate names to cite here, no? You seem to have gotten stuck in
> an eddy of this field; your selected highlights don't sound like
> central exemplars of the category. And why do you lump motivation in
> with that?

The things I cited were randomly selected examples of topics in the
field of Cognitive Psychology/Cognitive Science. Am I to be faulted for
only selecting those? Did you really expect me to list every single
topic of interest in that domain?

Later, you cite a social psychology experiment in which people judge
each other's intelligence after asking questions, and you conclude with
the interesting remark:

> The moral is that you can look
> very intelligent by asking people hard-to-answer questions.
> I expect you were not aware of this phenomenon, nor deliberately
> trying to exploit this known human bias. But as for me, I recognized
> more than half of the obscure names you said, and less than all, and I
> know how good a showing that *really* is - after discounting the
> effects of the standard human bias which this situation happens to
> match.

I did not try to "look intelligent" and I did not, in fact, mention any
"obscure names". The purpose of that post was to define communities of
people who were fluent in various fields, and I tried to define those
communities by giving a kind of litmus test - lists of topics that a
person in that field would instantly recognize as elementary, not
"obscure names".

> Having said that, then by all means, if it comes to showing off, two
> can play at *that* game.

I did not have this kind of puerile, testosterone-ridden jousting in
mind when I made the original post, but if you want to sidetrack the
discussion in that direction, I will try to respond as best I can.....




:-) :-) :-) :-) :-) :-)

Eliezer S. Yudkowsky wrote:
> Shades of the laundry list in
> Richard Loosemore wrote:
> > ************************************************************
> >
> > Now, some of these communities are more directly hands-on, while some
> > just watch and comment and contribute from afar, but the six different
> > languages that they speak and the six different paradigms they bring to
> > the table are all in some way relevant to the task of understanding how
> > cognitive systems might work, and how we might go about building an AI.
> >
> > But the problem is that you can go into one of these communities and
> > find very talented people who are completely ignorant of what is going
> > on in the others. Often, it is not just ignorance but actual scorn and
> > disdain, as if they are proud not to know what is happening elsewhere,
> > because they regard the ideas (and sometimes the people) in some of
> > those other communities as irrelevant or stupid. Frequently, people
> > have a smattering of some other field and think that they therefore know
> > it all.
> Some roads go on, if not forever, then a long long way. It's a funny
> thing, you know; I was just about to ask you if you belonged to some
> communities, but apart from neuroscience they weren't on your list.
> Incidentally, a decent neuroscience-user ought to know many special
> cases of human brain damage, and you did not say of that community that
> they would recognize the name of Phineas Gage. I mention this because
> studying human brain-damage cases can also help defeat anthropomorphism.
> I recall from memory - though not, I fear, complete with a full citation
> - an experiment done in social psychology. A person X asked another
> person Y a set of far-ranging questions, which could be drawn from any
> knowledge X had managed to accumulate, in front of another person Z.
> Naturally, Y didn't know the answers to nearly all of the questions. X
> and Y were randomly assigned to their roles, and there are *very* few
> people who know so much that they can readily answer questions from
> *any* other person's specialty. The interesting part was when they
> asked the participants whether X or Y looked more intelligent. X
> usually said that he did not think he had looked particularly
> intelligent; Y said that X looked somewhat more intelligent; Z said that
> X looked very much more intelligent. The moral is that you can look
> very intelligent by asking people hard-to-answer questions.
> I expect you were not aware of this phenomenon, nor deliberately trying
> to exploit this known human bias. But as for me, I recognized more than
> half of the obscure names you said, and less than all, and I know how
> good a showing that *really* is - after discounting the effects of the
> standard human bias which this situation happens to match.

You only recognised "more than half" of the "obscure names"? That's a
pity: I knew all of them.

Which of them did you find obscure? Did you only know half of each, or
all of some and nothing of some others? Can I help with some of the
ones where you feel out of your depth?

> Having said that, then by all means, if it comes to showing off, two can
> play at *that* game.
> I have never encountered someone who might qualify as a member of all
> the communities I think to be necessary. Possibly Eric B. Baum, but
> with him I have not yet spoken. (Still reading through Baum's book, but
> he quotes the right people.)

Ahh! Eric B. Baum. You were impressed by him, huh? So you must be one
of those people who were fooled when he tried to explain qualia as a
form of mechanism, calling this an answer to the "hard problem" [of
consciousness] and making all the people who defined the term "hard
problem of consciousness" piss themselves with laughter at his stupidity?

Baum really looks pretty impressive if you don't read his actual words
too carefully, doesn't he?

> Here are the (G), (H), and (I) I'd add to your list. I wouldn't be
> surprised to find that in another two years I add a (J) and (K). Some
> roads go on forever, and others just go on a very long way, and
> sometimes also roads are much less long than they look. Some roads we
> never know the length until we have arrived at our destination, and look
> back on our pathway; and I am not yet at my destination.
> The disciples of Tversky and Kahneman, they understand the details of
> human reasoning well enough to illustrate exactly how it fails, not just
> try to explain the mysteries of its success. Those well-rounded in this
> field study social psychology as well as heuristics and biases, and they
> overlap communities (H) and (I) on Bayesian rationality and evolutionary
> psychology respectively. They know the conjunction fallacy and
> sometimes they even avoid it; they know things about human
> overconfidence that would freeze the blood of most people who think
> themselves pessimists. Infant and developmental psychology is of
> particular interest to AI builders.

That's funny: didn't you recognize that this is precisely the same as
the Community A that I defined? This is Cognitive Science/Cognitive
Psychology. Why would you think that Tversky and Kahneman are some kind
of separate community?

Why would you think that I don't know all of this stuff?

> The students of an ancient art devised by Laplace, which is therefore
> called Bayesian. Probability theory, decision theory, information
> theory, statistics; Kolmogorov and Solomonoff, Jaynes and Shannon. The
> masters of this art can describe ignorance more precisely than most folk
> can describe their knowledge, and if you don't realize that's a
> pragmatically useful mathematics then you aren't in community (H).
> These are the people to whom "intelligence" is not a sacred mystery...
> not to some of us, anyway.

Boy, you are so right there! They don't think of intelligence as a
sacred mystery; they think it is so simple that it only involves
Bayesian Inference.

Like Behaviorists and Ptolemaic Astronomers, they mistake a formalism
that approximately describes a system for the mechanism that is actually
inside the system. They can carry on like this for centuries, adding
epicycles onto their models in order to refine them. When Bayesian
Inference does not seem to cut it, they assert that *in principle* a
sufficiently complex Bayesian Inference system really would be able to
cut it ... but they are not able to understand that the "in principle"
bit of their argument depends on subtleties that they don't think much
about.

In particular, they don't notice when the mechanism that is supposed to
do the mapping between internal symbols and external referents, in their
kind of system, turns out to require more intelligence than the
reasoning engine itself .... and they usually don't notice this because
they write all their programs with programmer-defined symbols/concepts
(implicitly inserting the intelligence themselves, you see), thus sparing
their system the pain of doing the work necessary to ground itself.

If these people understood what was going on in the other communities,
they might understand these issues. Typically, they don't.

> Down with the Standard Social Sciences Model! Long live the Unified
> Causal Model! If you recognized that as a parody of Tooby and Cosmides,
> you still may not know anywhere near as much as you think you do about
> evolutionary biology - not unless you know the difference between
> Hardy-Weinberg equilibrium and linkage equilibrium, or you can show how
> Price's Equation generalizes Fisher's Fundamental Theorem of Natural
> Selection. Those who seek to build AI will have studied the
> evolutionary anthropology of human intelligence, a la Terrence Deacon.
> But there is a more important use for evolutionary biology. There are
> two known, studied, powerful optimization processes in this world:
> cumulative natural selection, and the human mind. And interestingly
> enough, science understands natural selection a lot more solidly than it
> understands humans. The reason is simple enough; natural selection is a
> much simpler optimization process - so simple, in fact, that it can't
> help but accrete needlessly complex processes like human intelligence.
> If you really want to learn how not to anthropomorphize, study
> evolutionary biology with math.
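(For the record, Hardy-Weinberg equilibrium is hardly forbidding
mathematics. A throwaway sketch of my own, in plain Python and taken
from no text you cite, is enough to state it and check it:)

```python
def genotype_freqs(p):
    """Hardy-Weinberg genotype frequencies from allele frequency p (q = 1 - p)."""
    q = 1.0 - p
    return p * p, 2 * p * q, q * q   # AA, Aa, aa

def next_allele_freq(hom_dom, het, hom_rec):
    """Allele frequency of A in the next generation under random mating."""
    return hom_dom + het / 2.0

p = 0.3
AA, Aa, aa = genotype_freqs(p)
assert abs(AA + Aa + aa - 1.0) < 1e-12
# Equilibrium: one round of random mating leaves the allele frequency unchanged.
assert abs(next_allele_freq(AA, Aa, aa) - p) < 1e-12
```

One generation of random mating and the allele frequency is exactly
where it started; that is the whole theorem.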

Here you remind me of John Searle, famous bête noire of the AI community,
who will probably never understand the "levels" difference between
systems that *are* intelligent (e.g. humans) and systems that are
collections of interacting intelligences (e.g. human societies) and,
jumping up almost but not quite a whole level again, systems that are
interacting species of "intelligences" (evolution).

This is one of the silliest mistakes that a person interested in AI
could make. You know Conway's game of Life? I've got a dinky little
simulation here on this machine that will show me a whole zoo of gliders
and loaves and glider guns and traffic lights and whatnot. Demanding
that an AI person should study "evolutionary biology with math" is about
as stupid as demanding that someone interested in the structure of
computers should study every last detail of the gliders, loaves, glider
guns and traffic lights, etc. in Conway's Life.
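For what it is worth, the zoo costs almost nothing to reproduce. Here
is a minimal sketch of Conway's Life (my own toy code, nothing from the
literature) that steps the standard glider:

```python
from collections import Counter

def step(live):
    """One generation of Conway's Life on an unbounded grid of live cells."""
    # Count how many live neighbours each candidate cell has.
    counts = Counter((r + dr, c + dc)
                     for (r, c) in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The standard glider, as (row, col) coordinates.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
# After four generations the glider has translated one cell diagonally.
```

Four generations on, the glider has moved one cell down and to the
right and is otherwise unchanged. The gliders are trivia compared with
the substrate rule, which is exactly my point.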

Level of description fallacy. Searle fell for it when he invented his
ridiculous Chinese Room. Why would you make the same dumb mistake?

> Loosemore, I commend to you the following documents with respect to
> community (H) of which you are most sorely in need of joining, followed
> thereafter by (I) and (G).
> Having read the second document, you will understand what is wrong, as a
> matter of scientific procedure and rational reasoning, with attributing
> human intelligence to "the emergent properties of a hypercomplex
> network" as though this were a hypothesis.

I did read the documents. I knew about Bayes Theorem already.

A lot of sound and fury, signifying nothing. You have no real
conception of the limitations of mathematics, do you? You don't seem to
understand that the forces that shape thought and the forces that shape
our evaluation of technical theories (each possibly separate forces,
though related) might not be governed by your post-hoc Bayesian analysis
of them. The entire concept of the separation between an approximate
description of a process and the mechanism that actually *is* the
process is completely lost on you. Your technical.html document is one
long stream-of-consciousness eulogy to the perfection of mathematics as
applied to a couple of particular domains (thought, and the nature of
scientific inquiry), without any indication that you can recite back to
me clearly and succinctly the issue that a number of people would raise
against your eulogy. You cannot even comprehend this issue.

[Oh, I forgot: I know mathematics too. I took it as far as some
postgraduate courses in General Relativity and Mathematical Foundations
of Quantum Mechanics. I can see why you started this whole "I can be
smarter than you" thing .... it's a heck of a lot of fun to let your hair
down occasionally, isn't it!! I am normally very modest, so I am
grateful that you gave me such a perfect excuse to be this silly for a
while.]

Isn't it a little arrogant to tell me to read stuff that I already
understand, when you have a little catching up to do in the domains
that, earlier, you said you only "half" knew?

Especially since the very powerful arguments that people in other fields
have levelled against the kind of approach to AI typified by Bayesian
Inference cannot be understood or discussed unless you are familiar with
those other fields?

What if knowledge of all the six domains I mentioned gave a person the
tools to understand that your approach was doomed? That was my original
point.

> You can skip the first document if you understand simple Bayesian
> reasoning *thoroughly*, by which I mean that you can write Bayes's Rule
> from memory and that you know why transforming probabilities to log-odds
> ratios can simplify bookkeeping.
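Since you bring up log-odds: the bookkeeping point is elementary. In
log-odds form, a Bayesian update is just the addition of the log
likelihood ratio. A short sketch of my own (the numbers are arbitrary,
for illustration only):

```python
import math

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) via Bayes's Rule."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1.0 - prior))

def logodds(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

prior = 0.01                  # P(H)
lr = 0.8 / 0.1                # likelihood ratio P(E|H) / P(E|~H)

# Direct application of Bayes's Rule:
post = bayes_update(prior, 0.8, 0.1)

# The same update in log-odds: just add the log of the likelihood ratio.
post_lo = logodds(prior) + math.log(lr)
assert abs(logodds(post) - post_lo) < 1e-9
```

Multiplying likelihood ratios becomes adding their logs; that is the
whole of the bookkeeping advantage.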
> You may also be interested in the evolutionary psychology of human
> intelligence and my own take on human concepts, findable at the
> now-obsolete:
>> Those who would understand completely if you started discussing
>> "Shiffrin and Schneider", or "Levels of Processing Theory", or "Deep
>> Dyslexia." They would also know what a word-exchange error, a
>> morpheme-exchange error or a semantic substitution was. They would
>> understand the power-law of skill acquisition. They would know about
>> "Smith and Medin" if you asked them for their thoughts about
>> classical, prototype and feature theories of concepts. As to what the
>> current thinking was on those issues, they might not be able to answer
>> immediately, but they would know what you were asking for, and would
>> be able to track it down pretty quickly. And if you mentioned
>> "motivation", "compulsion", "obsession" or "pleasure" they would
>> assume you were just using these as shorthand for certain mechanisms,
>> rather than assuming you were talking dualist philosophy.
> Smith and Medin? Eleanor Rosch or George Lakoff would be more
> appropriate names to cite here, no? You seem to have gotten stuck in an
> eddy of this field; your selected highlights don't sound like central
> exemplars of the category. And why do you lump motivation in with that?
> It sounds like your community (A) is intended to stand for the whole of
> cognitive psychology, a field which includes also (G) as a special
> case. If so, you need to study way more cognitive psychology. A fine
> and classic book is "Judgment Under Uncertainty" from (G).

As I said at the outset, this is foolishness. I have read "Judgment
Under Uncertainty", and Lakoff's "Women, Fire and Dangerous Things" ...
both of which are sitting on the shelf behind me.

Why do I "need to study way more cognitive psychology"?

>> Those who know how to write serious amounts of LISP, who understand
>> pruning algorithms in state-space search, and who know the difference
>> between Blackboards, GAs, and neural nets. They might well be able to
>> tell you about "unification" in Prolog, and be able to give you a
>> thoughtful discussion of the differences between John Koza's genetic
>> programming and genetic algorithms and evolutionary programming. They
>> would definitely know about goal hierarchies and planning systems.
> Also known as Mainstream AI: the predicate logic users, connectionists,
> and artificial evolutionists. What they know about goal hierarchies and
> planning systems is quite different from what decision theorists know
> about expected utility maximization, though of course there's some overlap.

I was conjoining the decision theorists with the hard AI group, since
most of the AI people I know are perfectly familiar with the former.

> I note that FAI issues lie much closer to decision theory. It is
> better, in discussing FAI, to know who first wrote about Newcomb's
> Problem than to know who built SHRDLU.
> I once wrote of this field that it is important primarily as a history
> of failure.
>> Those to whom the term "edge of chaos" is not just something they
>> learned from James Gleick. These people are comfortable with the idea
>> that mathematics is a fringe activity that goes on at the tractable
>> edge of a vast abyss of completely intractable systems and equations.
>> When they use the term "non-linear" they don't mean something that is
>> not a straight line, nor are they talking about finding tricks that
>> yield analytic solutions to certain nonlinear equations. They are
>> equally comfortable talking about a national economy and a brain as a
>> "CAS" and they can point to meaningful similarities in the behavior of
>> these two sorts of system. Almost all of these people are seriously
>> well versed in mathematics, but unlike the main body of mathematicians
>> proper, they understand the limitations of analytic attempts to
>> characterize systems in the real world.
> I'm not part of community C and I maintain an extreme skepticism of its
> popular philosophy, as opposed to particular successful technical
> applications, for reasons given in "A Technical Explanation of Technical
> Explanation".

You speak from a profound lack of depth in the one field where depth of
understanding is most important. You mistake "particular successful
technical applications" for the issues of most importance to AI.

There is nothing wrong with skepticism, if it is based on
understanding rather than wilful, studied ignorance.

>> Those who have had the kind of experience in which they find
>> themselves fifty levels deep in a debugger, working on Final Candidate
>> 7 of a piece of software comprising 4000 source files of C and C++, in
>> a hopeless attempt to troubleshoot problems in a codebase that they
>> have only written tiny parts of, and in which the rest is mostly
>> undocumented and barely commented (by people who often did not have
>> much English), with the product due to ship in two weeks. An
>> experience that bears about as much relationship to a computer science
>> degree as Mrs Featherstone's Finishing School For Young Ladies does to
>> a whorehouse.
> I've stayed up 36 hours in a row, twelve levels deep in a debugger,
> hunting a mysterious stack smasher in a C++ application with at least
> 200 source files, which I did write myself. Close enough. Long live
> Python!

You wrote the 200 files yourself? So you didn't have to navigate
through a codebase of 4000 files written by someone else? What are
you, wet behind the ears? :-)

Corel abandoned the Mac version of CorelDraw a couple of years ago,
after putting me and my buddies through hell to get it ported across
from Windows. Their own team had tried to port it once, and failed.
Then they tried a second time, and failed again. The third time they
asked for help, and the outfit I worked for took on the job and
succeeded. Corel went through another four or five upgrades before they
abandoned it forever.

I know why ;-).

>> Those who could give you a reasonable account of where Penrose,
>> Chalmers and Dennett would stand with respect to one another. They
>> could easily distinguish the Hard Problem from other versions of the
>> consciousness issue, even if they might disagree with Chalmers about
>> the conclusion to be drawn. They know roughly what supervenience is.
>> They could certainly distinguish functionalism (various breeds thereof)
>> from epiphenomenalism and physicalism, and they could talk about what
>> various camps thought about the issues of dancing, inverted and absent
>> qualia.
> Sadly I recognize every word and phrase in this paragraph, legacy of a
> wasted childhood, like being able to sing the theme song from Thundercats.

Shame. There is valuable stuff buried in among the dross.

>> Those who know enough about real neural hardware to think that there
>> are serious questions about whether the real computation takes place
>> in floods of junk in a synaptic cleft or in specific timings of
>> incoming spikes in the dendritic tree. They know about programmed
>> cell death and how that might relate to learning. They might know
>> about the Hodgkin-Huxley equations. They understand what Marr had to
>> say about the possible role of Purkinje cells in fine motor control,
>> and they would know way too much about the architectural features of
>> the brain.
> We *know* that dendritic computing exists, it's no longer a "serious
> question" but an answered one.

What are you talking about? Do you actually read my words, or just some
fantasy version of what you think I wrote?

I didn't say that dendritic computing was a serious question, I said
that "whether the *real computation* takes place [...] in the dendritic
tree" was a serious question. Totally different issue.

So, that pretty much wraps it up for your extra communities that you
would like everyone to know about. One was identical to the first one
on my list, one is a narrow quasi-mathematical community who live in a
world of their own, but who DO NEED TO BE UNDERSTOOD, AND SHOULD NOT BE
IGNORED .... the only problem is that when you understand them, they
don't understand you, and they cannot understand that they are building
castles on sand. And the other one was almost completely irrelevant -
fun to keep up with, but no help.

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT