From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun May 05 2002 - 21:38:04 MDT
Ben Goertzel wrote:
>
> I would respect your opinion more if you had personally taken on the
> challenge of designing a "real AI" system. I understand you intend to do
> this sometime in the future. I suspect that once you have done so, we will
> be able to have much more productive conversations. I think it will be
> easier to map various Novamente ideas into aspects of your detailed AI
> design, than it is to map them into aspects of your abstract theory.
Ben, sometimes writing code is taking the easy way out. I understand that
you believe resources should be put into Novamente, rather than, say, SIAI,
and that's certainly not against the law. But with all due respect,
Novamente seems to be constructed out of ideas that I had at one point or
another, but which I looked at and said: "No, it's not that easy. This
problem is harder than that - this method will work for small problems, but
not for big problems; it's not good enough for real AI." To me it looks
like Novamente is going to try for real AI and go splat. It's just not that
powerful, nor does it look like a framework that could be rebuilt into
something that powerful. If I wanted to fling myself against
the problem and go splat, I could have done that anytime after 1996. You
are welcome to believe that the problem of creating true intelligence is
enormously smaller than I think it is, and that enormously less complexity
is needed to handle it, in which case I'm sure it makes sense for you to
criticize me on the grounds of not having flung myself at the problem yet.
From my perspective, it is very easy and tempting to start implementing an
inadequate design, but futile.
You have been known, from time to time, to remark on my youth and my not
having running AI code, which I consider to be "cheap shots" (i.e., taking
the easy way out), so let me take what I fully acknowledge to be a cheap
shot, and ask whether either Novamente or Webmind has done anything really
impressive in the realm of AI? If you have so much more experience than I,
then can you share the experiences that led you to believe Novamente is a
design for a general intelligence, rather than (as it seems to me) a
pattern-recognition system that may be capable of achieving limited goals in
a very small class of patterns that are tractable for it?
I've given Novamente a lot of benefit of the doubt in the past, but after
reading your manuscript and finding that most of the credit I had tentatively
extended was mistaken, I can't credit your "experience" with Webmind, or the
vast additional amount of design, no hint of which appeared in the Novamente
manuscript, until you tell me what *specifically* your experiences or extra
design are. You talk about Novamente being able to prove mathematical
theorems once the Mizar database is translated into Novamente propositions,
and about Novamente being able to make design improvements to itself once it
has the logic of a Java supercompiler. I just can't see this as reasonable,
even taking different intuitions about AI into account. It looks to me like
another AI-go-splat debacle in the making.
Why do it? Why make all these lofty predictions? When SIAI starts its own
AI project, we aren't going to be telling people we'll have a [whatever] in
[whenever]. All we'll guarantee is the *attempt* at seed AI because seed AI
is an extremely hard problem. Why do so many AI projects violate this basic
rule? Does AI really look that easy to them? Do you have to enormously
exaggerate the promise of your system just to get it funded? Is there a
selection effect making sure that only people who underestimate AI even make
the attempt, while everyone who sees the real size of the problem goes into
a more tractable area of cognitive science? Ifni knows I'd never have stuck
with the problem past 1998 (when I thought real AI would take a Manhattan
project) if the fate of the entire human species hadn't been at stake. Am I
the first pessimist ever to go into real AI in the first place? Why are
people *still* making cheery, optimistic predictions about insanely hard
problems? If people were still trying to solve the problem, I could
understand that, but what's with the cheery optimism?
Right now it looks to me like, in another few years, I'm going to be dealing
with people asking: "Yeah, well, what happened to the Novamente project
that promised us transhuman seed AI, and (didn't pan out) / (turned out to
be just a data-mining system)?" And I'm going to wearily say, "I predicted
in advance that would happen, and that in fact I would end up answering this
very question; here, let me show you the message in the SL4 archives."
You keep saying that I ought to just throw myself into design, as if it were
an ordinary problem of above-average difficulty, rather than a critical step
along the pathway of one of the ultimate challenges. In the first chapter
of your manuscript you casually toss around the terms "seed AI" and
"transhuman intelligence" as if they were marketing buzzwords. You don't
present it as a climax of a long, careful argument; you just toss it in with
no advance justification. It's like you first claimed that Novamente could
do general intelligence because that was the most impressive thing you'd
heard of, and once you heard about the Singularity you decided to add that
as a claim too. It's very easy for you to claim that Novamente is a design
for a real AI. Lenat can claim that Cyc is a design for a real AI. Newell
and Simon can claim that GPS is a design for a real AI. It doesn't mean
that you've gotten started coding a seed AI and I haven't. It means that
you have a much lower threshold for accepting what looks to you like a
probable solution. To me it looks like I'm being penalized for admitting
the real difficulty of the problem instead of using a quick, easy, and
unworkable solution. And I'll admit I'm annoyed, and I'm even more annoyed
that you're using the term "seed AI", because I don't want seed AI plagued
by the cloud of failure created when a bunch of projects cheerfully fling
themselves against the wall and go splat. But, as you say, you don't need
my permission. Fine; I'll do my best to clean up the mess afterward. But
it still seems to me that it would be very easy to avoid the entire debacle
just by changing the way you talk about the problem.
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence