RE: Activism vs. Futurism

From: Ben Goertzel
Date: Sat Sep 07 2002 - 09:20:59 MDT


> > Eliezer says:
> >
> >> Ben, there is a difference between working at a full-time job that
> >> happens to (in belief or in fact) benefit the Singularity; and
> >> explicitly beginning from the Singularity as a starting point and
> >> choosing your actions accordingly, before the fact rather than
> >> afterward.
> >
> > I guess there is a clear *psychological* difference there, but not a
> > clear *pragmatic* one.
> Psychological differences *make* pragmatic differences. If your life was
> more directed by abstract reasoning and less by fleeting subjective
> impressions

You're quite presumptive, and quite incorrect, about my own life and
psychology, Eliezer!

I don't know where you got the idea that my life is substantially driven by
"fleeting subjective impressions"?

That seems like a very strange claim to me, and would probably seem strange
to anyone who knew me well.

For one thing, I've been going in basically the same direction with my life
since age 15 or so -- so whatever has been governing my life has hardly been
"fleeting" on the time scale of a human life (as opposed to, say, the
geological or astronomical time scales, on which ALL our lives are fleeting).

> you'd have more experience with the way that philosophical
> differences can propagate down to huge differences in action and strategy.

Of course philosophical differences imply differences in actions and
strategies. But they do so in VERY complex ways, not in obvious and simple
ones.

For example, among my father's friends in an earlier stage of his life, I
knew many hard-line Marxists who sincerely believed the US government was
evil and had to be overthrown. This came out of deeply-held philosophy on
their part. Some of these people actually joined revolutionary efforts in
other countries; others stayed home and wrote papers. One guy shot himself
partly because the new US revolution seemed so far off. The same philosophy
led to many different actions.

> > Consider the case of someone who is working on a project for a while,
> > and later realizes that it has the potential to help with the
> > Singularity. Suppose they then continue their project with even greater
> > enthusiasm because they now see its broader implications in terms of
> > the Singularity. To me, this person is working toward the Singularity
> > just as validly as if they had started their project with the
> > Singularity in mind.
> Yes, well, again, that's because you haven't accumulated any experience
> with the intricacies of Singularity strategy

You seem to believe that only you, and those who agree with you, have any
understanding of "Singularity strategy"

To you, it seems, Kurzweil doesn't get the Singularity... I don't get it
either... only a tiny handful of people see the light (i.e., think about the
Singularity in close to exactly the same way you do).

It seems to me that there are many different ways of looking at the
Singularity and working toward it, and that with the current state of
knowledge, we really don't know whose view is correct.

How do you explain the fact that

a) you have written your views on the Singularity down;
b) Kurzweil and I are both highly intelligent, know a lot about the
Singularity, and are aware of your views (I don't know how closely Ray has
read them; I've read them in detail);
c) neither of us agrees with you in detail?

Do you explain it by

1) saying that we're being irrational and you're being rational?

or

2) admitting that you aren't able to make a convincing argument, due to the
limited knowledge that your ideas are based upon, and the fact that they're
fundamentally based on some intuitive leaps?

If 2), then how can you say with such confidence that only people who agree
closely with you have any understanding of Singularity strategy?

If 1), then I think you're deluding yourself, of course...

> and hence have the to-me
> bizarre belief that you can take a project invented for other reasons and
> nudge it in the direction of a few specific aspects of the
> Singularity and
> end up with something that's as strong and coherent as a project created
> from scratch to serve the Singularity.

Frankly, my own feeling is that my own project is significantly "stronger"
than the SIAI project. However, I realize that I have a bias here, and that
my judgment here is based partly on intuitions; so I don't believe others
are irrational if they disagree with me!

I don't really believe your approach to Friendly AI will work (based on what
I've read about it so far, for reasons that have been much discussed on this
list), and I haven't seen any algorithm-level details about your approach to
AI in general. So I have no reason, at this stage, to consider SIAI's work
stronger than my own.

I agree that SIAI's (i.e. your) writings are *coherent*, in the sense that
they all present a common and reasonably comprehensible point of view. But
it's reasonably easy to do that in conceptual and even semi-technical
writings. Marxist political writings and libertarian political writings are
each very coherent, in themselves, yet they can't both be correct!

> Of course, if I couldn't be friends with people who make what I see as
> blatant searing errors, I wouldn't have any friends.

Hmmmm... I'm not going to follow this one up ;)

> What changed my life was Vinge's idea of
> smarter-than-human intelligence causing a breakdown in our model of the
> future, not any of the previous speculations about recursive
> self-improvement or faster-than-human thinking, which is why I think that
> Vinge hit the nail exactly on the head the first time (impressive, that)
> and that Kurzweil, Smart, and others who extrapolate Moore's Law are
> missing the whole point.

I don't think it's fair to say that Kurzweil, Smart and others are "missing
the whole point."

I think they are seeing a slightly different point than you are (or I am),
and I think it's reasonably possible one of them will turn out to be righter
than you (or me).

I disagree with them, but they're smart people and I can't convince them I'm
right, because our disagreements are based on intuitions rather than
demonstrated facts.

Kurzweil believes that real AI will almost surely come about only through
simulation of human brains, and that the increase in intelligence of these
simulated human brains will be moderately but not extremely fast.

I disagree with him on these points, but he's not "missing the whole point"
about the Singularity.

George W. Bush, for example, is missing the whole point about the
Singularity.

Although I believe AGI is achievable within this decade, and that
intelligence acceleration will be a fact after human-level AGI is reached, I
don't believe these conclusions are *obvious*, and I don't expect to be able
to convince people with *differently-oriented intuitions* of these things
until after the AGI is achieved. Fortunately I have found some others with
similarly-oriented intuitions to work on the project with me.

> Again, there's a difference between being *influenced* by a
> picture of the
> future, and making activist choices based on, and solely on, an explicit
> ethics and futuristic strategy. This "psychological difference" is
> reflected in more complex strategies, the ability to rule out courses of
> action that would otherwise be rationalized, a perception of fine
> differences... all the things that humans use their intelligence for.

Perhaps so.... However, I don't see this incredibly superior complexity of
strategy and fineness of perception in your writings or SIAI's actions so
far. I'll be waiting with bated breath ;)

> > You may feel that someone who is explicitly working toward the
> > Singularity as the *prime supergoal* of all their actions, can be
> > trusted more thoroughly to make decisions pertinent toward the
> > Singularity.
> It's not just a question of trust - although you're right, I don't trust
> you to make correct choices about when Novamente needs which Friendly AI
> features unless the sole and only point of Novamente is as a Singularity
> seed.

Of course the sole and only *long term* point of Novamente is as a
Singularity seed.

We are using interim versions of Novamente for commercial purposes (e.g.
bioinformatics), but under licensing agreements that leave us (the core team
of developers) full ownership of the codebase and full rights to use it for
research purposes.

You may find this impure, but hey, we don't have a patron like SIAI does.

We are intentionally structuring our commercial pursuits in such a way that
they will not interfere with our long-term humanitarian/transhumanitarian
goals for the system. This is actually a pain in terms of legal paperwork,
but is
a necessity because we ARE developing the system with these long term goals
in mind.

> It's a question of professional competence as a Singularity
> strategist; a whole area of thought that I don't think you've explored.

You don't think I have explored it ... because I have made the choice not to
spend my time writing about it.

Actually, many people have thought a great deal about Singularity strategy
without taking the time to write a lot about it, for various reasons --
lack of enthusiasm for writing, or else (as in my case) not seeing a purpose
in writing on the topic at present.

The time for me to write down my view on Singularity strategy will be when
Novamente approaches human-level AGI. Furthermore, my thoughts will be more
valuable at that time due to the added insight obtained from experimenting
with near human level AI's.

> Your goal heterarchy has the strange property that one of the goals in it
> affects six billion lives, the fate of Earth-originating
> intelligent life,
> and the entire future, while the others do not. Your bizarre attempt to
> consider these goals as coequal is the reason that I think you're using
> fleeting subjective impressions of importance rather than conscious
> consideration of predicted impacts.

I tend to look at things from more than one perspective.

From a larger perspective, of course, working toward the Singularity is
tremendously more important than the goal of entertaining myself or taking
care of my kids....

From my own perspective as an individual human, all these things are
important. That's just the way it is. So I make a balance.

You also ignore the fact that there are differing degrees of certainty
attached to these different goals.

I.e., whether my work will affect the Singularity is uncertain -- and it's
also possible that my (or your) work will affect it *badly* even though I
think it will affect it well...

Whereas my shorter-term goals are rather more tangible, concrete, and easier
to make estimates about...

> I realize that many people on Earth get along just fine using their
> built-in subjective impressions to assign relative importance to their
> goals, despite the flaws and the inconsistencies and the blatant searing
> errors from a normative standpoint; but for someone involved in the
> Singularity it is dangerous, and for a would-be constructor of real AI it
> is absurd.

It's not really absurd. I'm a human being, and working toward creating real
AI is one aspect of my life. It's a very, very important aspect, but still
it's not my ENTIRE life.

This afternoon, I'm not deciding whether to go outside and play with my kids
or stay here at the computer based on a rational calculation. I'm deciding
it based on my mixed-up human judgment, which incorporates the fact that
it'll be fun to go outside and play, that it's good for the kids to get time
with their dad, etc.

I don't think I need to govern the details of my day to day life based on
rational calculation in order to be rational about designing AGI.

> This from the person who seems unwilling to believe that real altruists
> exist? It's not like I'm the only one, or even a very exceptional one if
> we're just going to measure strength of commitment. Go watch the film
> "Gandhi" some time and ask yourself about the thousands of people who
> followed Gandhi into the line of fire *without* even Gandhi's protection
> of celebrity. Now why wouldn't you expect people like that to get
> involved with the Singularity? Where *else* would they go?

I'm not sure what exactly you mean by comparing yourself to Gandhi.

He was a great man, but not a perfect man -- he made some bad errors in
judgment, and it's clear from his biography that he was motivated by plenty
of his own psychological demons as well as by altruistic feelings.

Of course many people will follow inspirational leaders who hold extreme
points of view. That doesn't mean that I will agree these leaders have good
judgment.

> Well, Ben, this is because there are two groups of people who know damn
> well that SIAI is devoted solely, firstly, and only to the Singularity,
> and unfortunately you belong to neither.

I understand that the *idea* of SIAI is to promote the Singularity.

However, the *practice* of SIAI, so far, seems very narrowly tied to your
own perspectives on all Singularity-related issues.

May I ask, what does SIAI plan to do to promote alternatives to your
approach to AGI? If it gets a lot of funds, will it split the money among
different AGI projects, or will it put them all into your own AGI project?

What does it plan to do to promote alternatives to your own speculative
theory on Friendly AI? (And I think all theories on Friendly AI are
speculative at this point, not just yours.)

When I see the SIAI website posting some of the views on Friendly AI that
explicitly contradict your own, I'll start to feel more like it's a generic
Singularity-promoting organization.

When I see the SIAI funding people whose views explicitly contradict your
own, then I'll really believe SIAI is what you say it is.

I must note that my own organizations are NOT generic in nature. My "Real
AI Institute" is devoted specifically to developing the Novamente AI Engine
for research purposes, and makes no pretensions of genericity.

I do think there is a role for a generic Singularity-promoting organization,
but I think it would be best if this organization were not tied to anyone's
particular views on AI or Friendliness or other specific topics -- not mine,
not yours, not Kurzweil's, etc.

> You try to avoid
> labeling
> different ways of thinking as "wrong" but the price of doing so
> appears to
> have been that you can no longer really appreciate that wrong ways of
> thinking exist. I'm a rational altruist working solely for the
> Singularity and that involves major real differences from your way of
> thinking. Get over it. If you think my psychology is wrong, say so, but
> accept that my mind works differently than yours.

I do think wrong ways of thinking exist. I think that Hitler had a wrong
way of thinking, and I think that George W. Bush has a wrong way of
thinking, and I think that nearly all religious people have wrong ways of
thinking.

I don't know if you have a wrong way of thinking or not. However, I worry
sometimes that you may have a psychologically unhealthy way of thinking. I
hope not...

-- Ben G
