From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Sat Sep 07 2002 - 00:47:38 MDT
Ben Goertzel wrote:
> Eliezer says:
>> Ben, there is a difference between working at a full-time job that
>> happens to (in belief or in fact) benefit the Singularity; and
>> explicitly beginning from the Singularity as a starting point and
>> choosing your actions accordingly, before the fact rather than
> I guess there is a clear *psychological* difference there, but not a
> clear *pragmatic* one.
Psychological differences *make* pragmatic differences. If your life were
more directed by abstract reasoning and less by fleeting subjective
impressions, you'd have more experience with the way that philosophical
differences can propagate down to huge differences in action and strategy.
> Consider the case of someone who is working on a project for a while,
> and later realizes that it has the potential to help with the
> Singularity. Suppose they then continue their project with even greater
> enthusiasm because they now see its broader implications in terms of
> the Singularity. To me, this person is working toward the Singularity
> just as validly as if they had started their project with the
> Singularity in mind.
Yes, well, again, that's because you haven't accumulated any experience
with the intricacies of Singularity strategy and hence have the to-me
bizarre belief that you can take a project invented for other reasons and
nudge it in the direction of a few specific aspects of the Singularity and
end up with something that's as strong and coherent as a project created
from scratch to serve the Singularity. It is possible that you will turn
out to be correct about this - it could be a fact - but from my
perspective it leads to what I see as blatant searing errors on your part.
Of course, if I couldn't be friends with people who make what I see as
blatant searing errors, I wouldn't have any friends.
> The reason I chose to work on AI instead of time travel, unification of
> fundamental physics, genetics or pure mathematics, is that I felt it
> had the potential to bring about this transcendent order of being
I knew about the possibilities of recursively self-improving AI and
enormously faster-than-human AI since age 11. It didn't make much
impression on me except to convince me that the future would be cool in
some vague unspecified way, and influence me to think in terms of a career
in nanotechnology. What changed my life was Vinge's idea of
smarter-than-human intelligence causing a breakdown in our model of the
future, not any of the previous speculations about recursive
self-improvement or faster-than-human thinking, which is why I think that
Vinge hit the nail exactly on the head the first time (impressive, that)
and that Kurzweil, Smart, and others who extrapolate Moore's Law are
missing the whole point.
Again, there's a difference between being *influenced* by a picture of the
future, and making activist choices based on, and solely on, an explicit
ethics and futuristic strategy. This "psychological difference" is
reflected in more complex strategies, the ability to rule out courses of
action that would otherwise be rationalized, a perception of fine
differences... all the things that humans use their intelligence for.
> You may feel that someone who is explicitly working toward the
> Singularity as the *prime supergoal* of all their actions, can be
> trusted more thoroughly to make decisions pertinent toward the Singularity.
It's not just a question of trust - although you're right, I don't trust
you to make correct choices about when Novamente needs which Friendly AI
features unless the sole and only point of Novamente is as a Singularity
seed. It's a question of professional competence as a Singularity
strategist; a whole area of thought that I don't think you've explored.
> But I am not so sure of this at all. The problem is that
> this kind of extremism in devotion to a cause, throughout human
> history, has often been correlated with poor judgment. This is not to
> say that I *mistrust* someone particularly if they have the Singularity
> (or anything else) as a prime supergoal of their actions, only to say
> that I don't value their judgment particularly because of their extreme
> devotion.
> Personally, although the Singularity (in my own sense of the term) has
> been the main guiding motivation behind my research work and my whole
> career, it is NOT the entire motivation behind my life. It is not the
> supergoal of ALL my actions, only of a majority of my actions. I don't
> improvise at the piano because it is helpful for the Singularity, I do
> it because I enjoy it. I could make a rationalization and say that I
> need to play piano sometimes because it clears my mind and makes me
> more able to work on AI afterwards, but I don't bother to make that
> rationalization. I accept that I have a goal heterarchy in my mind,
> not a goal hierarchy, and that working toward the Singularity is a very
> important goal of the human organism that is me, but not the *prime
> supergoal*.
Your goal heterarchy has the strange property that one of the goals in it
affects six billion lives, the fate of Earth-originating intelligent life,
and the entire future, while the others do not. Your bizarre attempt to
consider these goals as coequal is the reason that I think you're using
fleeting subjective impressions of importance rather than conscious
consideration of predicted impacts.
I realize that many people on Earth get along just fine using their
built-in subjective impressions to assign relative importance to their
goals, despite the flaws and the inconsistencies and the blatant searing
errors from a normative standpoint; but for someone involved in the
Singularity it is dangerous, and for a would-be constructor of real AI it
> In your definition of whether someone is "working full time on the
> Singularity," you are judging people based on their psychological
> motivations rather than their actions. If you like to categorize
> people in this way, you're welcome to. One problem with this, however,
> is that your own insight into other people's psychological motivations
> is rather limited.
This from the person who seems unwilling to believe that real altruists
exist? It's not like I'm the only one, or even a very exceptional one if
we're just going to measure strength of commitment. Go watch the film
"Gandhi" some time and ask yourself about the thousands of people who
followed Gandhi into the line of fire *without* even Gandhi's protection
of celebrity. Now why wouldn't you expect people like that to get
involved with the Singularity? Where *else* would they go?
> I prefer to judge whether someone is working toward the Singularity or
> not by looking at *what they're actually doing*.
> And I *certainly* don't think it's reasonable to implicitly assume that
> only people working for SIAI are truly devoted to the Singularity!
I don't. But it seems reasonable to assume that only people whose day
jobs are explicitly and solely about the Singularity have day jobs
explicitly and solely about the Singularity. There are already people
beyond me and Chris Rovner whose *lives* are explicitly and solely about
the Singularity; it's having a second person freed up to *work full-time*
on it that's the exciting part. At least if you're me.
> SIAI does not look to me like a generic Singularity-promoting
> organization. By all appearances it is an organization devoted to your
> particular approach to AI and Friendly AI. Thus, if someone is devoted
> to the Singularity but thinks your approaches to these problems are not
> correct, they are probably not going to join SIAI. Thus, your
> implication that only people involved with SIAI are truly full-time
> devoted to the Singularity, sounds a lot like an implication that only
> people who agree with your ideas are truly devoted to the Singularity.
> I'm sure that's not exactly what you meant to say, but it sure comes
> across that way sometimes.
Well, Ben, this is because there are two groups of people who know damn
well that SIAI is devoted solely, firstly, and only to the Singularity,
and unfortunately you belong to neither.
The first group is composed of people who've been around since I first
showed up on the Extropians list in 1996, and who can attest that I
initially started out by talking about a wide range of possible
Singularity strategies, from human intelligence enhancement via
neurohacking, to collaborative filtering as a means of increasing
civilizationwide intelligence, to AI, before gradually settling on seed AI
as a strategy as it became more and more apparent that this was probably
the fastest way, with the Singularity Institute being founded only after
the writing of a long document that explicitly considered possible paths
to the Singularity and the differential impacts of different Singularity
strategies.
The second group is composed of Singularity rationalists to whom working
toward the Singularity as the ab-initio strategic goal is
straightforwardly and obviously rational; they have no reason to believe
I'm not capable of doing the same. If I believe that a certain strategy
constitutes the best path toward the Singularity (and there has to be one,
unless you want to spend the rest of your life in wishy-washy armwaving)
then - as this group knows - this is a quite sufficient explanation for my
focusing on that strategy; other explanations for my psychology, such as a
subjective personal attachment to a particular theory of AI, are possible
hypotheses to be considered, but certainly not necessary to the
explanation, since all it takes is supposing that I can follow a simple
chain of logic to its conclusion.
For all that you try to chide me on it, Ben, sometimes it seems to me that
you're the one who has difficulty imagining psychologies unlike his own.
I know that a lot of people don't think the way I do. It so happens
there's a *reason* I think this particular way, and that I think those
other people are *wrong*, but I know that a lot of wrong ways of thinking
exist. Now you can certainly go ahead and tell me that the way I think is
wrong, but I wish you'd accept that the way I think is *different*. I
have no trouble accepting that your way of thinking is different from mine
and no compunctions about labeling it as wrong. You try to avoid labeling
different ways of thinking as "wrong" but the price of doing so appears to
have been that you can no longer really appreciate that wrong ways of
thinking exist. I'm a rational altruist working solely for the
Singularity and that involves major real differences from your way of
thinking. Get over it. If you think my psychology is wrong, say so, but
accept that my mind works differently than yours.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:40 MDT