Reductionism (was: future of human evolution)

From: Eliezer Yudkowsky
Date: Sat Oct 02 2004 - 11:32:04 MDT

Marc Geddes wrote:
> Suppose I asked what the explanation was for why molecules on the tip of
> my nose were at a certain location in space at 7.15pm tonight. A full
> physics description of the motions of all the atoms in the universe and
> the forces between them would yield almost no understanding. The
> *causal description* would be just that: it would be a hugely
> complicated quantum equation charting the motions of the molecules on
> the tip of my nose, which would indicate with a certain probability
> where these molecules would be at 7.15pm. Only in one very narrow sense
> is there any explanation for the location. Certainly not a full
> explanation.
> On the other hand a perfectly good explanation could be given WITHOUT
> deriving anything from physics.
> For instance someone with knowledge of my habits knows that I usually
> sit down to surf the net just after dinner time, and I do this at
> the same place (where the computer is located). So the explanation for
> the location of the molecules on the tip of my nose at 7.15pm follows
> simply from the fact that these molecules are connected to the entity known
> as 'Marc Geddes' with known habits. No (or at least very little)
> physics needed. And in fact this explanation is *better* than the
> physics explanation.

Solomonoff induction is Bayesian probability plus a prior distribution over
hypotheses that says that explanations with lower Kolmogorov complexity are
more probable, formalizing Occam's Razor.
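A minimal sketch of that Occam prior (a toy, not the real uncomputable
Solomonoff prior; the hypotheses and bit-lengths here are made up for
illustration): weight each hypothesis by 2^(-description length in bits),
keep only the hypotheses consistent with the data, and renormalize.

```python
def occam_posterior(hypotheses, data):
    """hypotheses: list of (name, description_length_bits, predict_fn),
    where predict_fn(n) is the hypothesis's prediction for item n.
    Returns a posterior over the hypotheses that fit the data exactly."""
    weights = {}
    for name, length_bits, predict in hypotheses:
        if all(predict(i) == x for i, x in enumerate(data)):
            weights[name] = 2.0 ** -length_bits  # toy Occam prior
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Two invented hypotheses that both fit the sequence 0, 1, 2, 3:
hyps = [
    ("identity (short program)", 10, lambda n: n),
    ("identity-with-exception (longer program)", 30,
     lambda n: n if n < 100 else 42),
]
posterior = occam_posterior(hyps, [0, 1, 2, 3])
# The shorter hypothesis dominates: 2^-10 / (2^-10 + 2^-30) > 0.999999
```

Both hypotheses predict the observed data perfectly, so the data alone
cannot separate them; the complexity prior does all the work.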

Solomonoff induction, or even a decent approximation of it, is intractable
for human minds, because very simple explanations can be intractably
expensive to compute all the way out to actual predictions.
Physics is extremely simple (in Kolmogorov complexity) and explains
literally everything, but physics is unimaginably expensive to compute.
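The gap between description length and computational cost is easy to
demonstrate. The Ackermann function (a standard example, not something
from the post) has a description a few lines long, yet its runtime grows
faster than any tower of exponentials in its arguments:

```python
def ackermann(m, n):
    # Tiny in Kolmogorov complexity, explosive in compute.
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9  -- instant
print(ackermann(3, 3))  # 61 -- still instant
# ackermann(4, 2) equals 2**65536 - 3: the *description* above stays
# three lines long, but computing it this way would never finish.
```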

Now, you, a human, can come up with a short explanation that is more
cheaply computable than physics. And this explanation seems good to you,
since it enables you to actually compute a prediction and manipulate
reality, whereas the underlying equations of physics are intractable to you.
But reality itself does not care about your cheapo explanation. Reality
cares only about the explanation that the universe itself uses, that
incredibly intractable process of quantum mechanics. The computation you
have in your mind that lets you cheaply predict your nose is not part of
your nose; it exists only in the particles making up your brain. Your
actual nose uses only physics. And indeed, could you pay the cost of
computing, the equations of quantum mechanics would give you a more precise
(better calibrated) predictive distribution over the possible locations of
your nose, than anything you could do with mere explanation. Even if the
explanation is simpler in the Kolmogorov sense, and not just cheaper to
compute, reality itself still doesn't care about anything but physics.

That's it. That's the whole fuss silly philosophers make over reductionism
and holism. A computationally intractable physical phenomenon can have
regularities that enable human minds to produce "explanations", but the
explanations are still only in the mind, not in the underlying physics.
The explanations are not anything above or beyond the physics. They are
not anything extra that must be added to the physics. They are just
cheaper ways for humans to compute (almost) the same results.
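One way to make "cheaper ways to compute (almost) the same results"
concrete (my example, not from the post): a closed-form formula for a
falling object reproduces, in one arithmetic step, nearly the answer that
a fine-grained step-by-step simulation grinds out over 200,000 steps.

```python
G = 9.81  # m/s^2, standard gravity

def height_closed_form(v0, t):
    # The cheap regularity: one multiply-and-subtract.
    return v0 * t - 0.5 * G * t * t

def height_simulated(v0, t, dt=1e-5):
    # The expensive low-level path: brute-force Euler integration.
    y, v = 0.0, v0
    for _ in range(int(t / dt)):
        y += v * dt
        v -= G * dt
    return y

exact = height_closed_form(20.0, 2.0)   # 20*2 - 0.5*9.81*4 = 20.38
approx = height_simulated(20.0, 2.0)    # ~20.38, after 200,000 steps
```

The two agree to within a fraction of a millimeter; the formula is the
"explanation", and the step-by-step integration stands in for the
underlying physics it summarizes.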

To understand a physical process is to absorb enough regularities in it
that you can predict and manipulate it in real time. To understand is to
approximate-well-enough, using vastly less computing power than would be
required for the best possible prediction. But there is nothing mysterious
about understanding. There is nothing in "wetness", as you call your cheap
approximation, that is not already present in the water molecules - or, if
there were, it would be an error of your approximation. For the water
molecules do not know that they are "wet".

Philosophers get all weirded out about how water appears to have this
magical additional property of wetness. This error is yet another case of
Jaynes's "Mind Projection Fallacy": thinking as if explanations were real
things out in the world, instead of cheap imperfect approximations of
physics. A human may discover the cheap approximation of wetness, and this
will be an additional thought that was not there before. But the map is
not the territory. The apparent "additional property" exists in your
explanation of the water, playing no role in the physics of the water
itself. The thought of "wetness" exists in you as a thought, distinct from
your thought about the low-level physics of water molecules; the water
molecules themselves have no additional property of wetness apart from
their low-level physics. The notion of "holistic" behavior is a property
of the map, not the territory.

Marc Geddes wrote:
> No way! This and other such absurd statements on SL4
> makes me think that you and Eliezer have no
> understanding of systems theory or levels of
> explanation.


Ben Goertzel wrote:
>> Yes, it does. If wetness wasn't physically implementable, it would not
>> exist.
> The limitations of this point of view should be obvious to anyone who's
> studied Buddhist psychology or Western phenomenology ;-)
> You say that only the physical world "exists" -- but what is this
> physical world? It's an abstraction that you learned about through
> conversations and textbooks. I.e., it's a structure in your mind that
> you built out of simpler structures in your mind -- simpler structures
> corresponding to sensations coming in through sights, sounds and other
> sensations.
> You can take a point of view that the physical world is primary and your
> mind is just a consequence of the physical domain. Or you can take a
> point of view that mind is primary and the very concept of the physical
> is something you learn and build based on various mental experiences.
> Neither approach is more correct than the other; each approach seems to
> be useful for different purposes. I have found that a simultaneous
> awareness of both views is useful in the context of AI design.

This from someone who accuses *me* of mere philosophy.

Eliezer S. Yudkowsky                
Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:49 MDT