RE: Novamente project goals

From: Ben Goertzel (ben@goertzel.org)
Date: Mon Mar 18 2002 - 19:03:36 MST


> It so happens I believe that no matter what a project *says* it's trying
> to achieve, you can often figure out what it's *actually* trying to
> achieve by looking at the way the researchers act. Various projects have
> previously claimed to be trying to build an AI that was human-equivalent
> in one sense or another. Were they really trying to build a person? Of
> course not. They were trying (and failing) to build advanced tools. Had
> they really been trying to build a person, it would have shown in their
> attitude; they would have given some thought to whether the resulting
> system would be deserving of human rights, their responsibilities toward
> the created individual, and so on.

I think you are fairly wrong in your analysis of past AI researchers'
attitudes.

I think that there have been some teams in the past who have

a) seriously set out to build a real, human-equivalent general intelligence

b) NOT published about their ideas on ethics, responsibilities, etc.

You must understand that the "futurist ethics" ideas that you habitually
write about are *not* the kinds of things that academic scientists are
encouraged to publish their views on. The tradition is to publish your
technical results and save your speculations for coffee-table discussion.

The Web has changed this a little bit -- now, via the Net, researchers can
distribute their philosophical thoughts widely with relatively little
effort. But this hasn't really changed the culture of the technical
community, which includes the general belief that time spent writing about
philosophical, nontechnical ideas is mostly wasted time. In fact, it is
often considered worse than wasted time: writing about such things can be
career suicide for individuals who need to earn a living via their academic
reputations. Try getting tenure with a bunch of publications about the
human rights of future AIs on your résumé. Not easy.

I think if you talked to the actual researchers involved in past (failed) AI
projects, you'd find that they had personally thought about a lot of the
issues you mention. This has been my experience.

> Now, it certainly appears that Novamente has gotten this far in terms of
> feeling the emotional impact of envisioned consequences. Your dedication,
> your willingness to work on the AI even after Webmind went down, shows the
> same thing. Whatever it is you're trying to create, it doesn't feel like a
> fancy tool to you. I don't know what goes on within the Secret Private
> Novamente Mailing Lists, but I wouldn't be surprised to find that serious
> consideration of the moral responsibility you owe to the AI is a frequent
> topic.

Actually, we beat that topic into the ground in 1999 or so on the Webmind
Inc. tech list. It pretty much never comes up these days. We ran out of
new things to say about it. The Novamente internal mailing lists are almost
entirely technical, with occasional postings of links to interesting
research papers etc. Philosophical discussions more often occur when we
meet together in person (not that often, just a couple times a year, as I
live in the US and much of the team is in Brazil).

> But is your attitude toward Novamente really consistent with trying to
> create a superintelligence?

Eliezer, Novamente is a big enough project that it must be broken down into
phases. We think in terms of

Phase 1: getting Novamente to work as a highly flexible, robust, scalable,
cognition-based multi-domain datamining engine

This requires most of the cognition algorithms, and some perception
algorithms... and of course the "core framework"

Phase 2: putting in the nastiest parts of cognition and action, and doing
some basic language processing and planning tests with them

There are a couple of very hard problems (not so hard to code, but requiring
loads of computer memory and processing to run) which we will defer dealing
with until everything else is coded.

Phase 3: completing the goals, feelings and experiential-interaction
framework. Very little additional code here, a lot of additional testing...
and teaching. This is where we try to teach the system by interacting with
it in a shared perceptual environment.

Phase 4: making a very efficient procedure execution framework in the
system. This will enable us to recode the system's thought algorithms as
nodes and links in the system itself -- permitting the system to achieve
full "cognitive transparency" -- perceiving and modifying its own thought
algorithms.

The design goes through Phase 4, but we're currently somewhere around
halfway through Phase 1. Phase 4 is where I believe we'll see
superintelligence emerge -- how fast, I'm not sure. We may need Phase 4
even to get roughly human-level intelligence: some of the team think yes,
others think it can emerge in the middle of Phase 3.

We are building the system with all 4 phases in mind, from the start.

> You're uncomfortable with my declaration of intent to bypass having an
> ordinary life, because to you Novamente may be the greatest thing you ever
> do, but it won't be the only thing you ever do. You can have a life that
> includes wife, kids, and Novamente as accomplishments. For me the
> Singularity marks, not the end of everything, but the beginning of
> everything. It is the sum of what there is to do, here on Earth before the
> Singularity.

The Singularity will be neither the end of everything nor the beginning of
everything, actually!

And it is just NOT the sum of everything there is to do pre-Singularity. Do
you hang glide? I'll take you up sometime; maybe it will change your mind
;> Not as distracting as a sex life, but almost as exciting... and these
days, possibly safer ;-D

> Your attitude toward Novamente's outlook on life seems to be around the
> attitude I'd take toward building an AI that *wasn't* supposed to grow up
> into the Singularity Transition Guide.

Well, fine. But nevertheless, my attitude toward Novamente is the attitude
that *I* take toward building an AI intended to set off the Singularity.
You and I are rather different human beings, as has been exhaustively
pointed out on this list!!

I am not generally known as a humble person. However, I find I am a bit
humbler than you in terms of estimating any human's ability to *predict*
what a superhuman AI will be like, and to guide in detail what it evolves
into.

> If you're working on a superintelligence, though, the only problem with
> taking an oath is that any oath pales by comparison with the act itself.
> You don't sound like someone setting out to commit an act with (positive)
> consequences so tremendous that anyone setting a single dividing marker
> across all of human history throughout time would set it there, and not
> because they think Real AI is a philosophically important moment in human
> history, either. You appear, from what I can see through email, to
> consider such statements over-the-top. Implications that extend out from a
> scientifically active, ~H Novamente are okay; implications that extend out
> from superintelligence are not. Not just in terms of what you consider to
> be good public relations, which is a separate issue, but in terms of what
> you, personally, are comfortable with discussing.

Eli, obviously I am *comfortable* with discussing such ideas; we have
discussed them at great length on this list. If I were uncomfortable with
it, I wouldn't take the time to write long replies to your e-mails on the
topic!!!

I do get *bored* with discussing such things after a while, though, because
there seems to be a very limited amount that can usefully be said on such
topics, given the data at hand today. This is the reason we stopped
discussing AI morality on internal Webmind Inc. lists after a while.

> In short, everything about your emotional posture that I can read through
> email says that you're making decisions based on your vision of a ~H
> Novamente - not a superintelligent one.

Well, your reading of my "emotional posture" is not very good -- which is
not surprising, because we don't know each other well on a personal level,
and my emotional makeup has been shaped by a huge number of experiences very
different from any you've ever had.

> Now, it is well-known that figuring out people's real thoughts and emotions
> through email is an underconstrained problem. I'm not trying to pigeonhole
> you. Just consider this as depicting the causes and conclusions of my
> erroneous intuition in sufficient detail that you can fix what's broken.

I think that figuring out other peoples' real thoughts and emotions even IN
PERSON is a difficult problem. I'm not all that socially retarded, yet I
have trouble sometimes reading my WIFE AND KIDS' thoughts and emotions,
goodness!!

I think you should accept that others can be equally as serious about
building a superintelligent AI as you are, but take a DIFFERENT
PHILOSOPHICAL ATTITUDE than you do.

One implication of my philosophical attitude is that at this stage it's
not really worth thinking or talking *too much* about AI morality. It's
something to keep in mind as one progresses with one's work, but it's not a
central point until one has a system that one is actually talking with and
teaching. At that stage, I suspect various aspects of AI morality will
be a lot clearer than they are now. Having a real AI system to play with,
demonstrating morality and immorality on a baby-ish level, will add a hell
of a lot of new data and new conceptual richness to this sort of discussion.

I don't think we can figure out all that much about AI morality at this
stage. Not even for Novamente, let alone for your AI project, which is not
as far along.

You may say: "Yes, but what if the AI system goes into a hard takeoff while
you're playing with it and getting a feel for the AI morality issues?"

And I say: In the case of Novamente, I have a really good sense of the stage
at which a hard takeoff will be possible. (Most ambitiously, somewhere in
the middle of my Phase 3 above -- explaining more specifically would require
too many technical details.) The time to worry a lot about morality is when
this stage is much closer. Only then will the detailed knowledge be there
to support really meaningful intuitions about AI morality. Maybe we'll be
there in a year and a half (if we get some more funding soon), or maybe
it'll take 5 years (if annoying technical obstacles intervene, or we go
totally broke).

I feel that you spend a lot of time building conceptual castles in the sand:
formulating ideas about AI morality that are intriguing and conceptually
reasonable, but very far beyond anyone's detailed knowledge of AI mind
dynamics.

Much of your Friendly AI theory feels to me like Eric Drexler's detailed
engineering designs in his book Nanosystems. Interesting to read, and worth
working out for conceptual purposes, but *too far ahead* of current
practical technology to be expected to be accurate at even a moderate level
of detail. His concept of a molecular assembler will happen, but his
particular constructions in terms of molecular rods and so forth almost
certainly won't -- nanotech is already moving in different directions.
Similarly, Friendly AI will happen, but your particular approach in terms of
hierarchical ("acyclic digraph", whatever) goal systems will probably seem
as naive in 5 years, when we have human-level AI, as Drexler's designs do to
contemporary nanoengineers.

-- Ben G
