From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Jun 23 2002 - 04:37:03 MDT
Ben Goertzel wrote:
>
> Eliezer, I don't think that "AI can be built by borrowing human content"
> either, and you should know that.
>
> The Cyc project comes close to the perspective you cite, but that is not my
> project.
Perhaps. Nonetheless there is more Cycish stuff in Novamente than I am
comfortable with. Novamente does contain structures along the lines of
eat(cat, mice). I realize you insist that these are not the only structures
and that the eventual real version of Novamente will replace these small
structures with big emergent structures (that nonetheless follow roughly the
same rules as the small structures and can be translated into them for
faster processing). I guess what I'm trying to say is that we have different
ideas about how much of mind is implemented by content, the sort of stuff we
humans would regard as *transferable* knowledge - the kind of knowledge that
we communicate through books. I think you ascribe more mind to transferable
content than I do. I am not saying that you ascribe all mind to transferable
content, but definitely more than I do (and less than Cyc).
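To make concrete what I mean by "structures along the lines of eat(cat, mice)":
the following is a deliberately toy sketch of my own, not Novamente's or Cyc's
actual representation. Explicit content of this kind is, at bottom, a pile of
ground assertions plus machinery for matching queries against them - which is
exactly why it is so easy to write down and transfer.

    # Hypothetical illustration only -- not Novamente's or Cyc's actual code.
    # A "Cycish" knowledge base: explicit ground assertions plus a query
    # mechanism that matches patterns against them.

    from typing import Set, Tuple

    Assertion = Tuple[str, str, str]       # (predicate, arg1, arg2)

    kb: Set[Assertion] = {
        ("eat", "cat", "mice"),            # eat(cat, mice)
        ("eat", "cat", "fish"),
        ("chase", "cat", "mice"),
    }

    def query(predicate: str, arg1: str) -> Set[str]:
        """Return every known second argument of predicate(arg1, _)."""
        return {a2 for (p, a1, a2) in kb if p == predicate and a1 == arg1}

    print(query("eat", "cat"))             # {'mice', 'fish'} (set order may vary)

Everything interesting in a system like that lives in the explicit assertions,
which is what makes them transferable through books and databases; my claim is
that much less of a mind lives at that level than at the level of the machinery
underneath.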
> I think that the first AGI will be created by
>
> a) engineering a system embodying cognitive, perception and action
> mechanisms that are only loosely based on human cognitive science
We agree on this, of course, although we have different ideas of what
constitutes "loosely based".
> b) having this system grow from a baby into a useful intelligent mind by a
> combination of: autonomous exploration of digital and physical environments,
> self-organization, goal-oriented self-modification, and explicit teaching by
> humans
I think that the system will grow from a baby into a useful intelligent mind
through:

(I) improvement of the underlying brainware systems - for memory association,
forming concepts in modalities, associating to concepts, imposing and
satisfying concepts, searching for beliefs that satisfy sequiturs, and
combinatorial search within design problems and logic problems;

(II) accumulation of reflective realtime skills; accumulation of reflective
beliefs and concepts; accumulation of beliefs, concepts, and skills relative
to virtual microenvironments; and optimization of skill/belief/concept
representations and processes;

(III) learning to communicate with humans and make better use of human
assistance, and learning to abstractly model the blackbox "real world" through
the interpretation of human-written knowledge.
A tremendous part of an AI is brainware. The most important content - maybe
not the most content, but the most important content - is content that
describes things that only AIs and AI programmers have names for, and much
of it will be realtime skills that humans can help the AI learn but which have
no analogue in humans.
You think my design is too complex. Okay. Nonetheless, the more complex a
design is, the more mind arises from the stuff that implements that design,
and the more opportunities there are to improve mind by improving the
implementation (never mind actually *improving the design*!). I think that
the more specific a model of mind you have, the more ways you'll be able to
think of to improve it. And that, from my perspective, is why I see things
moving faster than your intuitions call for. There's more to improve, and
far more of intelligence is dependent on underlying brainware and on content
that is specific to AI minds. Not all of intelligence, but more of it than
in the Novamente design. And I don't put much reliance at all on abstract
models of the blackbox processes of our "real world", which are, at best,
what most of the corpus of human knowledge communicates to an AI.
Everything is about seed AI. The most critical knowledge an AI needs,
especially in the process of growing up, is the knowledge that makes the AI
smarter. This knowledge may be gained in the course of solving
microenvironmental problems posed by humans, and may indeed not be gainable
in any other way, but it is not actually knowledge of how to navigate Rube
Goldberg design problems with a billiard-ball toolset; it is knowledge about
how to think, and an AI will think differently from humans. Humans in
general (as opposed to successful AI researchers) have very little knowledge
of this kind, and what there is will be mostly untransferable because of the
difference in cognitive systems.
The real, underlying problems that an AI needs to solve in order to grow up,
learning how to think - these problems may be solved with the help of
humans, but solving them will not draw on existing explicit human knowledge,
and there will not be a one-to-one correspondence between the AI's
competence in problem domains and the AI's competence at the problem domain
of thinking. Even if the AI got up to "human-level" competence and no
farther at the problem of "thinking using an AI mind", there is no reason
why the *actual thinking* that resulted would be at a human level! My guess
is that it could be transhuman, though infrahumanity is a possibility.
Transference of human content is not the key issue in seed AI. The parts of
the AI that could conceivably get "stuck at the human level" are the
programming quality of underlying brainware, and reflective skills that are
acquired with human assistance. This is not the same level of organization
as actual thinking! In humans these processes are "stuck at the level" of
evolution's competency and internal brainware competency respectively, and
yet we ourselves are stuck at the human level, which is completely
different. If you build an AI with human-competence brainware and
human-competence reflective skills, you have not built a human-level AI! It
could be anywhere, but it won't be human-level; there's no reason why it
would be.
> In terms of part b), having human knowledge to read, and humans to
> communicate with, will be a big help to the system in boosting itself up to
> human-level intelligence, and MUCH LESS of a help to the system in boosting
> itself up further to superhuman intelligence.
>
> I guess what you're saying is that you think learning from human knowledge,
> and explicit education by humans, will not be an important part of getting
> an AGI up to human level. I disagree with you on this, but I can't prove
> I'm right, of course.
I think explicit education by humans will be an important part of
bootstrapping an AI to the level of being able to solve its own problems.
By the time human knowledge is even comprehensible to the AI, most of the
hard problems will have already been solved and the AI will probably be in
the middle of a hard takeoff.
> If an AI is not taught by us, and doesn't fill its mind with books we've
> written and theorems we've proved, how is it going to get intelligent?
> Purely by interacting with the environment on its own? This sounds a lot
> less efficient to me... and also a lot less likely to result in a
> human-sympathetic AI...
It will learn intelligence by solving human-posed problems whose actual
solutions are not as important as the AI's learning to think in the course of
solving them; and, of roughly equal importance, it will improve in underlying
brainware intelligence as the humans, and later the AI itself, create more and
more powerful brainware.
> I do NOT envision "pouring human content into an AI", in terms of directly
> force-feeding its mind with human knowledge databases a la Cyc. What I
> envision is that an AI will turn itself from a baby into a mature mind by
> learning from human teachers and by studying human knowledge, including
> books and mathematics and software, and perhaps also explicit knowledge DB's
> like Cyc.
You are correct in that I do not see this as being very useful. A little
useful, maybe; not a lot useful.
> And I believe that this knowledge will accelerate its development
> to the roughly human-level, much beyond the pace that would be possible if
> all this knowledge and all this teaching were not available.
> And I agree with that -- the question is *how fast* will the AI be able to
> improve itself.
>
> It's a quantitative question. Your intuitive estimate is much faster than
> mine...
Ben, you're the one who insists that everything is "intuition". I am happy
to describe your estimates as "intuitions" if you wish, but I think that
more detailed thoughts are both possible and desirable.
> I doubt that very much. I think that a key part of getting a baby AI to
> useful human-adult-level intelligence will be imbibing human patterns of
> thought and interaction -- learning from people via reading text and
> databases, and via interaction. Hence I think the first AGI, even if built
> on radically nonhuman data structures and dynamics, will have a lot of
> humanity in its emergent mind-patterns at first...
You seem to think that you create a general intelligence with all basic
dynamics in place, thereby creating a baby, which then educates itself up to
human-adult-level intelligence, which can be done by studying signals of
the kind which human adults use to communicate with each other. I don't see
this as likely. The process of going from baby to adult is likely to be
around half brainware improvement and half the accumulation of knowledge
that cannot be downloaded off the Internet. The most the corpus of human
knowledge can do is provide various little blackbox puzzles to be solved,
and most of those puzzles won't be the kind the AI needs to grow.
> I think you place way too much faith in "bootstrapping" style
> self-organization. Creating a smart system that can modify its own code,
> and giving it good perceptors and actuators, will lead to a mature, usefully
> self-improving and world-understanding AGI *eventually*, but how long will
> it take? I think the process will go faster by far if teaching by humans is
> a big part of the process. I also think that a human-friendly AGI is more
> likely to result if the system achieves its intelligence partly thru being
> taught by humans.
Okay, now *you're* misinterpreting *me*. I don't think that AGI can be
bootstrapped to through seed AI alone, nor do I think that human interaction
is unimportant.
Humans are a seed AI's foundations of order. Humans will teach the AI, but
what they will teach is not the corpus of human declarative knowledge. What
they teach will be domain problems at the right level for the AI to grow on,
and what the AI will learn will be how to think.
> You may say that with good enough learning methods, no teaching is
> necessary.
Incorrect. What I am saying is that what is taught will not be the corpus
of human declarative knowledge, nor would trying to teach that corpus prove
very useful.
> Maybe so. I know you think Novamente's learning methods are too
> weak, though you have not explained why to me in detail, nor have you
> proposed any concrete alternatives. However, I think that *culture and
> social interaction* help us humans to grow from babies into mature adult
> minds in spite of the weaknesses of our learning methods,
The fact that humans have evolved to rely on culture and social interaction
does not mean that an AI must do so. From an AI's-eye-view, the "humans" are
external blackbox objects that pose problems which, when the AI solves them,
turn out to lead to the acquisition of reusable reflective skills. (At
least, that's what happens if the humans are doing it right.)
> and I think that
> these same things can probably help a baby AGI to grow from a piece of
> software into a mature AGI capable of directing its activities in a useful
> way and solving hard problems.
I don't think the software of a baby AGI will much resemble the software of
a mature AGI, and I say "AGI", not "seed AI".
> You may say that I'm just anthropomorphizing here, but I don't think so.
> Clearly teaching an AGI will be very different from teaching a human. But
> it seems so dumb not to give our early-stage would-be AGI the benefit of
> human knowledge and intuition, which is considerable, though flawed.
The hardest part of AI is doing *anything* that will have a benefit. Most
things won't. Transferring over the corpus of human knowledge, in the form
of those signals in which it is stored for communication between humans,
will accomplish very little. It's not the kind of problem that an AI needs
to grow, or that an early AGI can solve at all.
> And if
> we do get it started with our teaching & our knowledge, then when it
> outstrips us, it will face a new set of challenges. I'm sure it will be
> able to meet these challenges, but how fast? I don't know, and neither do
> you!
And this "I don't know" is used as an argument for it happening at
humanscale speeds, or in a volume of uncertainty centered on humanscale speeds?
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence