Re: An essay I just wrote on the Singularity.

From: Brian Atkins
Date: Wed Dec 31 2003 - 14:24:31 MST

Robin Lee Powell wrote:
> On Wed, Dec 31, 2003 at 09:40:08AM -0800, Michael Anissimov wrote:
>>1. Why do you call Singularitarianism your "new religion"? I
>>know it's basically all in jest,
> No, it's not.
> I define religion as a set of beliefs held in absence of evidence or
> proof. Singularitarianism fits perfectly, regardless of how much I
> might wish otherwise.
>>but thousands of people have already misinterpreted the
>>Singularity as a result of the "Singularity = Rapture" meme,
> Yeah, I really haven't found any way to present the ideas without
> invoking that comparison. However, bear in mind that the reason
> everyone makes that comparison is because it's a perfectly *valid*
> one. Seriously. The only substantial difference between the
> singularity and the rapture is that no-one involved in the
> singularity claims to have had a vision from god.

You might enjoy this lengthy wta-talk post from approx 1 year ago:

-------- Original Message --------
Subject: Re: [wta-talk] Pragmatism Against Faith [was]: Singularity Issues
Date: Mon, 30 Dec 2002 00:49:12 -0500
From: Eliezer S. Yudkowsky
References: <Pine.GSO.4.44.0212291630070.21869-100000@socrates.Berkeley.EDU>

dalec@socrates.Berkeley.EDU wrote:
> Pragmatism doesn't say that the material world is voluntary,
> illusory, mental, or linguistic (even in extreme formulations like
Sellars's, who insisted that all awareness is a linguistic affair), nor do
> its post-structuralist cousins (Judith Butler is being falsely accused of
> this all the time). I am just saying that even when we have good reason
> to believe in the truth of a description or account we have no reason to
> believe we have found the description that the world would prefer to be
> described in, the final description, the description for the Ages, the
> description that is unassailable, the description that gives to those who
> hold it over those who do not a binding authority.

Why would *any* description, even a correct one, even a knowably correct
one, give those who hold it any kind of binding authority over anyone?
This sounds like a strawman argument to me.

I acknowledge that a final reality exists. I don't claim to have a
perfect description of it. I've written extensively about exactly that
critical distinction, as some of the people on this list may recall.

What I want is to keep track, at all times, of the distinction
between the map and the territory; and I don't think you're doing that.
It was the territory of aerodynamics, not anyone's map of aerodynamics,
that determined that the Wright Brothers' plane would actually fly. Just
because our knowledge of aerodynamics is, and forever will be, a map,
doesn't mean that it wasn't the territory of aerodynamics that lifted the
plane.

Yes, when I talk about the "territory of aerodynamics", this statement is
itself a map; so is my statement that, e.g., "contrary to a widespread
misapprehension, lift by airplane wings is *not* generated by the
Bernoulli Principle". Maps, by default, are about the territory. If, on
the other hand, you want to talk *about* a map, you need a metamap that
treats the first map as a territory. This is a difficult distinction to
keep track of; you can find some discussion of it in the section on
external reference semantics in _Creating Friendly AI_.
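The map/metamap distinction can be made concrete with a toy sketch (my own
illustration, not anything from the original post or from _Creating Friendly
AI_; the class names are invented for this example). Keeping the two levels
as distinct types makes it harder to confuse a claim about the territory
with a claim about a map:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Map:
    """A claim about the territory, e.g. about how wings generate lift."""
    claim: str

@dataclass(frozen=True)
class MetaMap:
    """A claim *about a map* -- it treats the Map itself as its territory."""
    subject: Map
    claim: str

aero = Map("lift is produced by deflecting air downward")
meta = MetaMap(aero, "this map does not resemble any mythological map")

# The metamap's claim is about the map, and says nothing about aerodynamics:
assert meta.subject is aero
assert not isinstance(meta, Map)
```

Conflating the two levels would be a type error in this sketch, which is
exactly the confusion the post is pointing at.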

When I talk about the Singularity, I am using a simple map in which I talk
about recursive self-improvement as opposed to evolution, indefinitely
extensible hardware as opposed to hardware-bounded brains, and so on.
You, on the other hand, complain about the map of the Singularity having a
supposed similarity to the map of religion. In so doing, you are using a
metamap, a map of a map, and you are failing to track that distinction.
You seem to think that your arguments about supposed resemblances between
the Singularity map and the religious map (metamap) can be mentioned in
the same breath as arguments about improvement rates and development
trajectories in recursively self-modifying AIs (simple map).

Some of us are interested in a special property that maps sometimes have;
this property is correspondence to the territory. We have no direct
access to the territory; if we did, we could determine the correspondence
by direct comparison. But we still know the abstract fact that the
territory exists, and the abstract fact that the correspondence between
any given map and territory will be imperfect. The imperfections where a
map fails to match the territory, for obvious reasons, are never part of
the map itself; but the abstract fact that imperfections probably exist
can be part of the map. From this abstract knowledge comes the idea of
improving a map so that it corresponds more closely to the territory.

Working with the abstract knowledge that the map and the territory differ,
it is possible to try and figure out rules to follow for bringing the map
into closer correspondence with the territory. One such set of rules is
called science.
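Such a rule set can be sketched in miniature (a toy model of my own, assuming
a hidden constant and Gaussian measurement noise; nothing here comes from the
original post). The "territory" is a value the map never sees directly; the
rule "average in each new observation" steadily improves the correspondence:

```python
import random

random.seed(0)
TERRITORY = 9.81            # hidden value; the map never sees it directly

def observe():
    """A noisy experiment: the territory plus measurement error."""
    return TERRITORY + random.gauss(0, 0.5)

estimate, n = 0.0, 0        # the map starts out badly wrong
for _ in range(1000):
    n += 1
    estimate += (observe() - estimate) / n   # running-mean update rule

# The map is still not the territory, but the error has shrunk:
assert abs(estimate - TERRITORY) < 0.1
```

The update rule works whether or not anyone believes in it, because what it
exploits is the real correspondence between observations and the territory.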

The reason why scientists have power, why humans have walked on the moon,
why witch doctors are less effective than surgeons, why a digital watch
works whether we believe in it or not, is that the scientific map has a
better correspondence to the territory. Does this consist of the
scientific map containing the metamap statement, "this map corresponds
more closely to the territory"? No, of course not. Any map might contain
that statement. The correspondence between map and territory is not a
part of the map. It is a part of the territory. Like the rest of the
territory, this correspondence is not directly accessible; we can make
maps of the correspondence but they're just maps. But it is nonetheless
this real correspondence that determines whether a theory works, just as
it is the territory of aerodynamics and not anyone's map of it that keeps
a 747 in the air.

Sometimes theories *really are* correct, even though they may not be
*knowably* correct - the correspondence is a part of the territory, and
independent of any maps which may be made of that correspondence, which is
why a digital watch works whether you believe in it or not. Our abstract
knowledge that sometimes theories really are correct is what explains our
observation that sometimes theories actually work. This is in no way
contradicted by our abstract knowledge that even theories which really are
correct will not be *knowably* correct. That a theory is sometimes
actually correct is one abstract fact about the territory and the mapping
process; that even a really correct theory is still not *knowably* correct
is a *separate* abstract fact about the mapping process.

My confidence that 2 + 2 = 4 may be 99% rather than 100%, but this doesn't
mean that 2 + 2 = 4 only 99 out of 100 times. That *I* am uncertain does
not mean *reality* is uncertain. It is a self-consistent account of the
history of our universe to describe it as starting with the eternally
absolutely correct unalterable fact that 2 + 2 = 4, and going from there
to the eventual physical development of a piece of matter called Eliezer
Yudkowsky whose brain contains a map of mathematics in which the statement
"2 + 2 = 4" is held with 99% confidence but can never rationally be held
with 100% confidence. In fact, it is this very hypothesis that my 99%
confidence attaches to.
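The point can be put in a few lines of code (my own sketch, not Eliezer's):
an agent's 99% credence is a fact about the agent's map, and has no effect
on how often the sum actually comes out to 4:

```python
credence = 0.99                  # uncertainty in the map, not in reality

results = [2 + 2 for _ in range(100)]
assert all(r == 4 for r in results)   # the territory: 4 every single time
assert credence < 1.0                 # the map: still short of certainty
```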

Physics existed before people, and any self-consistent account of the
universe needs to describe how people arise from physics, rather than the
other way around.

> At the beginning of this post I re-quoted the challenge which I will
> repeat again. Tell me what the criteria are on the basis of which you
> distinguish the descriptions which are affirmed as the best on offer by
> respectable contemporary science, from the super descriptions of "actual
> physics," with the special unassailable dignity and authority you seem
> to attach to these.

The key distinction: the territory of physics, and not our map of it, is
the causal agent that determines experimental results. Where our map
fails to correspond to the territory, observed experimental results differ
from what our map says they should be. By then adjusting the map, it can
be brought into closer correspondence with the territory. If you have the
abstract knowledge that the map differs from the territory somewhere, you
can deliberately perform experiments to uncover those failures of
correspondence and correct them. Hence Neil Armstrong.

> Yes, descriptions, like tables, bits of turf, electrochemical [states]
> of neural-nets, and such are all in causal relations to one another. Do
> you think my pragmatist account of knowledge is *less* causal than yours?
> Why so? I suspect that my account of epistemology is *indistinguishable*
> from yours apart from the weird moments when you start imagining a
> Universe that has preferences in the matter of how it should be described
> by humans, and in the quaint moments where you move from describing the
> reasons for holding your beliefs and begin to congratulate them in
> addition as "accurately modeling," "representing," "mirroring," or
> "picturing" the world, all of which looks like non-causal decoration with
> little force beyond the difference in saying you believe something and
> saying you really really really believe something.

If you forbid the discussion of the abstract fact that the territory
exists, on the mere grounds that any discussion of specific territory must
use a map, then there is no way to explain the greater observed efficacy
of science as compared to magic.

People like me have the idea that the *purpose* of a map, its reason for
existence, is the territory. This is why we object to, for example, your
assumption that any talk of a hard takeoff *must* be bad because it can,
given a sufficiently vague understanding (sorry!) and a lot of loose
analogizing, be made to look vaguely similar to at least one map produced
at some time by the vast and incoherent engine of human mythology. From
my perspective, a map that includes a hard takeoff is a bad thing if a
hard takeoff fails to materialize, and a map that includes a hard takeoff
is a good thing if a hard takeoff actually happens. Again, the fact that
we have no direct access to this level of reality does not mean it does
not exist; it is, for example, what would be expected to determine whether
our subjective experience of watching an AI mature ever includes the
subjective experience of watching the AI undergo a hard takeoff.

*If* you were to show that AI undergoing a hard takeoff is something we
should estimate as extremely unlikely given the rational/scientific rules
for determining how well a prospective map is likely to correspond to
territory, you would *then* be licensed to derisively compare my map of AI
to various noncorrespondent mythological maps, hypothesizing that there
was a common cause of failure in both cases. But until then, the simple
map of "an AI will probably undergo a hard takeoff, and this is why" is
more important to me than any metamap comparison between that simple map
and various mythological maps; a territorially correct map is worthwhile
in itself, even if it is not politically correct.

Eliezer S. Yudkowsky                
Research Fellow, Singularity Institute for Artificial Intelligence
Brian Atkins
Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:43 MDT