From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Sep 05 2005 - 15:14:49 MDT
Richard Loosemore wrote:
>
> Ahh! Eric B. Baum. You were impressed by him, huh? So you must be one
> of those people who were fooled when he tried to explain qualia as a
> form of mechanism, calling this an answer to the "hard problem" [of
> consciousness] and making all the people who defined the term "hard
> problem of consciousness" piss themselves with laughter at his stupidity?
>
> Baum really looks pretty impressive if you don't read his actual words
> too carefully, doesn't he?
Sure, I was pleasantly surprised by Baum. Baum had at least one new
idea and said at least one sensible thing about it, a compliment I'd pay
also to Jeff Hawkins. I don't expect anyone to get everything right. I
try to credit people for getting a single thing right, or making
progress on a problem, as otherwise I'd never be able to look favorably
on anyone.
Did Baum's explanation of consciousness dissipate the mystery? No, it
did not; everything Baum said was factually correct, but people confused
by the hard problem of consciousness would be just as confused after
hearing Baum's statements. I agree that Baum failed to dissipate the
apparent mystery of Chalmers's hard problem. Baum said some sensible
things about Occam's Razor, and introduced me to the notion of VC
dimension; VC dimension isn't important in itself, but it got me to
think about Occam's Razor in terms of the range of possibilities a
hypothesis class can account for, rather than the bits required to
describe an instance of a hypothesis class.
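To make the distinction concrete (a toy example of my own, not Baum's): the
class of threshold rules {x >= t} takes arbitrarily many bits to describe any
particular hypothesis, since t is a real number, yet it can account for only a
narrow range of labelings: its VC dimension is 1. A minimal Python sketch,
checking shattering by brute force:

    def threshold_labels(points, t):
        """Hypothesis class H = { x -> [x >= t] }: one real-valued parameter."""
        return tuple(int(x >= t) for x in points)

    def shatters(points):
        """True if the threshold class can realize every labeling of `points`."""
        candidates = list(points) + [min(points) - 1, max(points) + 1]
        achievable = {threshold_labels(points, t) for t in candidates}
        return len(achievable) == 2 ** len(points)

    print(shatters([0.0]))        # True:  a single point can be labeled both ways
    print(shatters([0.0, 1.0]))   # False: no threshold yields (1, 0); VC dimension is 1

The second line is the point of the measure: however many bits it takes to
write down t, no setting of it lets the class fit the labeling (1, 0), so the
class is simple in the sense that matters for Occam's Razor.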
>> COMMUNITY (H)
>>
>> The students of an ancient art devised by Laplace, which is therefore
>> called Bayesian. Probability theory, decision theory, information
>> theory, statistics; Kolmogorov and Solomonoff, Jaynes and Shannon.
>> The masters of this art can describe ignorance more precisely than
>> most folk can describe their knowledge, and if you don't realize
>> that's a pragmatically useful mathematics then you aren't in community
>> (H). These are the people to whom "intelligence" is not a sacred
>> mystery... not to some of us, anyway.
>
> Boy, you are so right there! They don't think of intelligence as a
> sacred mystery, they think it is so simple, it only involves Bayesian
> Inference!
Not at all. The enlightened use Bayesian inference and expected utility
maximization to measure the power of an intelligence. Like the
difference between understanding how to measure aerodynamic lift, and
knowing how to build an airplane. If you don't know how to measure
aerodynamic lift, good luck building an airplane. Knowing how to
measure success isn't enough to succeed, but it sure helps.
> Like Behaviorists and Ptolemaic Astronomers, they mistake a formalism
> that approximately describes a system for the mechanism that is actually
> inside the system. They can carry on like this for centuries, adding
> epicycles onto their models in order to refine them. When Bayesian
> Inference does not seem to cut it, they assert that *in principle* a
> sufficiently complex Bayesian Inference system really would be able to
> cut it ... but they are not able to understand that the "in principle"
> bit of their argument depends on subtleties that they don't think much
> about.
There are subtleties to real-world intelligence that don't appear in
standard Bayesian decision theory (he said controversially), but
Bayesian decision theory can describe a hell of a lot more than naive
students think. I bet that if you name three subtleties, I can describe
how Bayes plus expected utility plus Solomonoff (= AIXI) would do it
given infinite computing power.
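A cartoon of that recipe on a finite hypothesis set (my own sketch; the
hypothesis names, observations, and utilities are made up for illustration,
and real AIXI uses a Solomonoff prior over all programs, which is
uncomputable): weight each hypothesis by 2^-(description length), update the
weights on the evidence by Bayes, then take the action with the highest
posterior expected utility.

    # Toy "Bayes + expected utility + Solomonoff-style prior" sketch.
    hypotheses = {
        "simple":  dict(bits=3,  likelihood=0.2, utility={"stay": 1.0, "move": 0.0}),
        "complex": dict(bits=10, likelihood=0.9, utility={"stay": 0.0, "move": 1.0}),
    }

    # Solomonoff-style prior: shorter descriptions get exponentially more weight.
    prior = {name: 2.0 ** -h["bits"] for name, h in hypotheses.items()}

    # Bayesian update on the evidence (likelihood = P(evidence | hypothesis)).
    unnorm = {name: prior[name] * hypotheses[name]["likelihood"] for name in hypotheses}
    z = sum(unnorm.values())
    posterior = {name: w / z for name, w in unnorm.items()}

    # Expected-utility maximization over the available actions.
    expected = {a: sum(posterior[n] * hypotheses[n]["utility"][a] for n in hypotheses)
                for a in ("stay", "move")}
    best_action = max(expected, key=expected.get)
    print(posterior, expected, best_action)

AIXI is this loop taken to the limit: the hypothesis set is every program, the
prior weight is 2^-(program length), and the expectation runs over entire
future interaction histories.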
> In particular, they don't notice when the mechanism that is supposed to
> do the mapping between internal symbols and external referents, in their
> kind of system, turns out to require more intelligence than the
> reasoning engine itself .... and they usually don't notice this because
> they write all their programs with programmer-defined symbols/concepts
> (implicitly inserting the intelligence themselves, you see), thus sparing
> their system the pain of doing the work necessary to ground itself.
Historically true. I remember reading Jaynes and snorting mentally to
myself as Jaynes described a "robot" which came complete with
preformulated hypotheses. But it's not as if Jaynes was trying to build
a Friendly AI. I don't expect Jaynes to know that stuff, just to get
his probability theory right.
I note that the mechanism that maps from internal symbols to external
referents is readily understandable in Bayesian terms. AIXI can learn
to walk across a room.
I also note that I have written extensively about this very problem in
"Levels of Organization in General Intelligence".
> If these people understood what was going on in the other communities,
> they might understand these issues. Typically, they don't.
Again, historically true. Jaynes was a physicist before he was a
Bayesian, but I've no reason to believe he ever studied, say, visual
neurology.
So far as I've heard, in modern-day science, individuals are polymaths,
not communities.
If I despised the communities for that, I couldn't assemble the puzzle
pieces from each isolated community.
> Here you remind me of John Searle, famous bête noire of the AI community,
> who will probably never understand the "levels" difference between
> systems that *are* intelligent (e.g. humans) and systems that are
> collections of interacting intelligences (e.g. human societies) and,
> jumping up almost but not quite a whole level again, systems that are
> interacting species of "intelligences" (evolution).
>
> This is one of the silliest mistakes that a person interested in AI
> could make. You know Conway's game of Life? I've got a dinky little
> simulation here on this machine that will show me a whole zoo of gliders
> and loaves and glider guns and traffic lights and whatnot. Demanding
> that an AI person should study "evolutionary biology with math" is about
> as stupid as demanding that someone interested in the structure of
> computers should study every last detail of the gliders, loaves, glider
> guns and traffic lights, etc. in Conway's Life.
>
> Level of description fallacy. Searle fell for it when he invented his
> ridiculous Chinese Room. Why would you make the same dumb mistake?
Ho, ho, ho! Let me look up the appropriate response in my handbook...
ah, yes, here it is: "You speak from a profound lack of depth in the
one field where depth of understanding is most important." Evolutionary
biology with math isn't 'most important', but it sure as hell is important.
To try to understand human intelligence without understanding natural
selection is hopeless.
To try to understand optimization, you should study more than one kind
of powerful optimization process. Natural selection is one powerful
optimization process. Human intelligence is another. Until you have
studied both, you have no appreciation of how different two optimization
processes can be. "A barbarian is one who thinks the customs of his
island and tribe are the laws of nature." To cast off the human island,
you study evolutionary biology with math.
I know about separate levels of description. I'm not telling you that
ev-bio+math is how intelligence works. I'm telling you to study
ev-bio+math anyway, because it will help you understand human
intelligence and general optimization. After you have studied, you will
understand why you needed to study.
I should note that despite my skepticism, I'm quite open to the
possibility that if I study complexity theory, I will afterward slap
myself on the forehead and say, "I can't believe I tried to do this
without studying complexity theory." The question I deal with is
deciding where to spend limited study time - otherwise I'd study it all.
> I did read the documents. I knew about Bayes Theorem already.
Good for you. You do realize that Bayesian probability theory
encompasses a lot more territory than Bayes's Theorem?
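For instance (a toy illustration of my own, not part of the original
exchange): the bare theorem tells you how to rescale belief in one hypothesis,
but everyday Bayesian practice also includes things like marginalizing out
parameters you don't care about. A minimal sketch of a posterior predictive
for a coin with unknown bias, on a discrete grid:

    biases = [i / 10 for i in range(1, 10)]          # candidate values of the bias
    prior = {b: 1 / len(biases) for b in biases}     # uniform prior over the grid

    heads, tails = 7, 3                              # observed flips
    unnorm = {b: prior[b] * b ** heads * (1 - b) ** tails for b in biases}
    z = sum(unnorm.values())
    posterior = {b: w / z for b, w in unnorm.items()}

    # Probability the *next* flip is heads: integrate out the unknown bias
    # instead of plugging in a single best guess.
    p_next_heads = sum(posterior[b] * b for b in biases)
    print(round(p_next_heads, 3))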
> A lot of sound and fury, signifying nothing. You have no real
> conception of the limitations of mathematics, do you?
Yeah, right, a 21st-century human is going to know the "limitations" of
mathematics. After that, he'll tell me the limitations of science,
rationality, skepticism, observation, and reason. Because if he doesn't
see how to do something with mathematics, it can't be done.
> You don't seem to
> understand that the forces that shape thought and the forces that shape
> our evaluation of technical theories (each possibly separate forces,
> though related) might not be governed by your post-hoc Bayesian analysis
> of them. That entire concept of the separation between an approximate
> description of a process and the mechanism that actually *is* the
> process is completely lost on you.
I understand quite well the difference between an approximation and an
ideal, or the difference between a design goal and a design. I won't
say it's completely futile to try to build AI without knowing what you're
doing, because some technology does get built that way. But my concern
is Friendly AI, not AI, so I utterly abjured and renounced my old ideas
of blind exploration. From now on, I said to myself, I will understand
exactly what I'm doing *before* I do it.
> As I said at the outset, this is foolishness. I have read "Judgment
> Under Uncertainty", and Lakoff's "Women, Fire and Dangerous Things" ...
> sitting there on the shelf behind me.
Okay, you passed a couple of spot-checks; you're not a complete waste of
time.
Though you still seem not to have grasped that polymaths all study
*different* fields, so there's nothing impressive about being able
to name different communities. Anyone can rattle off the names of some
obscure books they've read. It's being able to answer, at least some of
the time, when someone else picks the question that implies you're
getting at least a little real coverage. You seem to have some myopia
with respect to this, asking me why I was telling you to study
additional fields when I hadn't even studied every single one you'd
already studied. Some roads go on a long, long way. That *you* can
name things you've studied isn't impressive. Having Lakoff on the shelf
behind you does imply you're not a total n00bie, not because Lakoff is
so important, but because I selected the question instead of you.
>> Also known as Mainstream AI: the predicate logic users,
>> connectionists, and artificial evolutionists. What they know about
>> goal hierarchies and planning systems is quite different from what
>> decision theorists know about expected utility maximization, though of
>> course there's some overlap.
>
> I was conjoining the decision theorists with the hard AI group, since
> most of the AI people I know are perfectly familiar with the latter.
Non sequitur. AI people may know some decision theory, but that doesn't
mean decision theory is identical to AI.
I would guess that not many AI people can spot-read the difference between:
p(B|A)
p(A []-> B)
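The first is ordinary conditional probability; the second is the probability
of a counterfactual, "if A were made true, B would hold." One common way to
cash out the difference (my gloss, with made-up numbers) is conditioning
versus intervening in a causal model: when a hidden common cause drives both
A and B, observing A raises the probability of B, but forcing A does nothing
to B. A small simulation:

    import random
    random.seed(0)
    N = 100_000

    def sample(force_a=None):
        c = random.random() < 0.5                                    # hidden common cause
        a = (random.random() < (0.9 if c else 0.1)) if force_a is None else force_a
        b = random.random() < (0.9 if c else 0.1)                    # B depends only on C
        return a, b

    observed = [sample() for _ in range(N)]
    p_b_given_a = sum(b for a, b in observed if a) / sum(a for a, _ in observed)
    p_b_forced_a = sum(b for _, b in (sample(force_a=True) for _ in range(N))) / N
    print(round(p_b_given_a, 2), round(p_b_forced_a, 2))   # roughly 0.82 vs 0.50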
>>> COMMUNITY (C)
>>>
>>> Those to whom the term "edge of chaos" is not just something they
>>> learned from James Gleick. These people are comfortable with the
>>> idea that mathematics is a fringe activity that goes on at the
>>> tractable edge of a vast abyss of completely intractable systems and
>>> equations. When they use the term "non-linear" they don't mean
>>> something that is not a straight line, nor are they talking about
>>> finding tricks that yield analytic solutions to certain nonlinear
>>> equations. They are equally comfortable talking about a national
>>> economy and a brain as a "CAS" and they can point to meaningful
>>> similarities in the behavior of these two sorts of system. Almost
>>> all of these people are seriously well versed in mathematics, but
>>> unlike the main body of mathematicians proper, they understand the
>>> limitations of analytic attempts to characterize systems in the real
>>> world.
>>
>> I'm not part of community C and I maintain an extreme skepticism of
>> its popular philosophy, as opposed to particular successful technical
>> applications, for reasons given in "A Technical Explanation of
>> Technical Explanation".
>
> You speak from a profound lack of depth in the one field where depth of
> understanding is most important. You mistake "particular successful
> technical applications" for the issues of most importance to AI.
>
> There is nothing wrong with skepticism. If it is based on
> understanding, rather than wilful, studied ignorance.
"Shut up and learn" is a plaint to which I am, in general, prepared to
be sympathetic. But you're not the only one with recommendations. Give
me one example of a technical understanding, one useful for making
better-than-random guesses about specific observable outcomes, which
derives from chaos theory. Phil Goetz, making stuff up at random, gave
decent examples of what I'm looking for. Make a good enough case, and
I'll put chaos theory at the head of my menu.
Many people are easily fooled into thinking they have attained some
tremendously important and significant understanding of something they
are still giving maxentropy probability distributions about, the qualia
crowd being one obvious example. Show me this isn't so of chaos theory.
>>> Those who could give you a reasonable account of where Penrose,
>>> Chalmers and Dennett would stand with respect to one another. They
>>> could easily distinguish the Hard Problem from other versions of the
>>> consciousness issue, even if they might disagree with Chalmers about
>>> the conclusion to be drawn. They know roughly what supervenience
>>> is. They could certainly distinguish functionalism (various breeds
>>> thereof) from epiphenomenalism and physicalism, and they could talk
>>> about what various camps thought about the issues of dancing,
>>> inverted and absent qualia.
>>
>> Sadly I recognize every word and phrase in this paragraph, legacy of a
>> wasted childhood, like being able to sing the theme song from
>> Thundercats.
>
> Shame. There is valuable stuff buried in among the dross.
If you've read _Technical Explanation_, you know my objection.
Mysterious answers to mysterious questions. "Qualia" reifies the
confusion into a substance, as did "phlogiston" and "elan vital".
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence