RE: How hard a Singularity?

From: Ben Goertzel
Date: Sat Jun 22 2002 - 21:40:28 MDT

> > Our seed AI is going to get its human-level intelligence, not purely by
> > its own efforts, but largely based on the human-level intelligence of
> > millions of humans working over years/decades/centuries.
> Ah. Well, again we have very different models of intelligence. I don't
> think you can use human knowledge as the mindstuff of an AI. I don't
> think AI can be built by borrowing human content.

Eliezer, I don't think that "AI can be built by borrowing human content"
either, and you should know that.

The Cyc project comes close to the perspective you cite, but that is not my
approach.

I think that the first AGI will be created by:

a) engineering a system embodying cognitive, perception and action
mechanisms that are only loosely based on human cognitive science

b) having this system grow from a baby into a useful intelligent mind by a
combination of: autonomous exploration of digital and physical environments,
self-organization, goal-oriented self-modification, and explicit teaching by
humans.

In terms of part b), having human knowledge to read, and humans to
communicate with, will be a big help to the system in boosting itself up to
human-level intelligence, and MUCH LESS of a help to the system in boosting
itself up further to superhuman intelligence.

I guess what you're saying is that you think learning from human knowledge,
and explicit education by humans, will not be an important part of getting
an AGI up to human level. I disagree with you on this, but I can't prove
I'm right, of course.

However, just because I think human knowledge and human teaching will be
helpful to an AI in reaching human-level intelligence, does NOT imply that I
want to "build an AI by borrowing human content."

If an AI is not taught by us, and doesn't fill its mind with books we've
written and theorems we've proved, how is it going to get intelligent?
Purely by interacting with the environment on its own? This sounds a lot
less efficient to me... and also a lot less likely to result in a
human-sympathetic AI...

> So you look at pouring human content into an AI, and say, "When we reach
> human-level, we will run out of mindstuff."

No, this is NOT AT ALL what I was saying. Yet again, you are putting words
into my mouth, making it seem as if I were making a much different, and much
weaker, point than I was actually making.

I do NOT envision "pouring human content into an AI", in terms of directly
force-feeding its mind with human knowledge databases a la Cyc. What I
envision is that an AI will turn itself from a baby into a mature mind by
learning from human teachers and by studying human knowledge, including
books and mathematics and software, and perhaps also explicit knowledge DB's
like Cyc. And I believe that this knowledge will accelerate its development
to roughly the human level, far beyond the pace that would be possible if
all this knowledge and all this teaching were not available.

> And I look at creating AI as the task of building more and more of that
> essential spark that *creates* content - with the transfer of any content
> the AI could not have created on her own, basically a bootstrap method or
> side issue

You should know by now that I also view the job of building AI as primarily
a job of creating the right cognitive mechanisms.

Novamente is intended as an autonomous, totally adaptive experiential
learning system, not as a system that confronts the world based on a fixed,
pre-provided set of knowledge.

However, I think that the right set of cognitive mechanisms gives you a
*baby* AI. I don't think that teaching the baby is "basically a bootstrap
method or side issue." (I understand that the word "baby" has some
undesirable anthropomorphic connotations, but the alternative would be to
fabricate a new word for a new AI with a content-free mind, and I can't
think of a good coinage at the moment!) I think that what the AGI learns
from human knowledge and interactive human teaching, will be just as
important a part of its mind as the cognitive mechanisms that are initially
put in.

> "When the AI reaches human level, she will be able to swallow the
> thoughts that went into her own creation; she will be able to improve
> her own spark, recursively."

And I agree with that -- the question is *how fast* will the AI be able to
improve itself.

It's a quantitative question. Your intuitive estimate is much faster than
mine.

> An AI will have much to learn from human mind-content, but by the time
> reaching human-level is anything like an issue, the most important part
> of what she knows will belong to her; it won't be borrowed from humans.

I doubt that very much. I think that a key part of getting a baby AI to
useful human-adult-level intelligence will be imbibing human patterns of
thought and interaction -- learning from people via reading text and
databases, and via interaction. Hence I think the first AGI, even if built
on radically nonhuman data structures and dynamics, will have a lot of
humanity in its emergent mind-patterns at first...

I think you place way too much faith in "bootstrapping" style
self-organization. Creating a smart system that can modify its own code,
and giving it good perceptors and actuators, will lead to a mature, usefully
self-improving and world-understanding AGI *eventually*, but how long will
it take? I think the process will go faster by far if teaching by humans is
a big part of the process. I also think that a human-friendly AGI is more
likely to result if the system achieves its intelligence partly through being
taught by humans.

You may say that with good enough learning methods, no teaching is
necessary. Maybe so. I know you think Novamente's learning methods are too
weak, though you have not explained why to me in detail, nor have you
proposed any concrete alternatives. However, I think that *culture and
social interaction* help us humans to grow from babies into mature adult
minds in spite of the weaknesses of our learning methods, and I think that
these same things can probably help a baby AGI to grow from a piece of
software into a mature AGI capable of directing its activities in a useful
way and solving hard problems.

You may say that I'm just anthropomorphizing here, but I don't think so.
Clearly teaching an AGI will be very different from teaching a human. But
it seems so dumb not to give our early-stage would-be AGI the benefit of
human knowledge and intuition, which is considerable, though flawed. And if
we do get it started with our teaching & our knowledge, then when it
outstrips us, it will face a new set of challenges. I'm sure it will be
able to meet these challenges, but how fast? I don't know, and neither do
you.

-- Ben G

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT