From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Sun Jun 23 2002 - 11:56:42 MDT
Ben Goertzel wrote:
> In my view, the most important content of an AGI mind will be things that
> neither the AI nor its programmers can name, at first. Namely: *abstract
> thought-patterns, ways of organizing ideas and ways of approaching
> problems*, which we humans use but know only implicitly, and which we will
> be able to transmit to AI minds implicitly through interaction in
> appropriate shared environments.
What kind of knowledge is this implicit knowledge? How will the AI absorb
it through interaction? Let's take the mnemonic experiential record of a
human interaction; what kind of algorithms will absorb "abstract
thought-patterns" from the record of human statements?
But then, you know my position on expecting things to happen without knowing
how they work...
> Of course, this general statement is not true. Often, in software
> engineering and other kinds of engineering, a very complex design is HARDER
> to improve than a simple one.
Evolution managed to sneak around this trap. An AI team will have to do so
as well; for example, through constructing plugin satisficing architectures
where any given task can be performed "well enough" through several
alternative routes.
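A plugin satisficing architecture of the kind gestured at here might be sketched as follows. This is a toy illustration, not anything from the email: the names (`Plugin`, `dispatch`, `good_enough`) and the quality-score convention are all assumptions introduced for the example.

```python
# Toy sketch of a "plugin satisficing" dispatcher: several interchangeable
# plugins can attempt the same task, and the first result that clears a
# "good enough" threshold is accepted. All names here are illustrative.

from typing import Callable, Optional

# A plugin takes a task and returns (answer, quality in [0, 1]).
Plugin = Callable[[str], tuple[str, float]]

def dispatch(task: str, plugins: list[Plugin],
             good_enough: float = 0.8) -> Optional[str]:
    """Try each plugin in turn; accept the first satisficing answer.

    If no plugin clears the threshold, fall back to the best attempt,
    so the system degrades gracefully when one component fails."""
    best_answer, best_quality = None, -1.0
    for plugin in plugins:
        answer, quality = plugin(task)
        if quality >= good_enough:
            return answer          # satisfice: stop at "well enough"
        if quality > best_quality:
            best_answer, best_quality = answer, quality
    return best_answer             # fallback: best available, not optimal

# Two toy plugins for the same task.
def exact_lookup(task: str) -> tuple[str, float]:
    table = {"2+2": "4"}
    return (table.get(task, ""), 1.0 if task in table else 0.0)

def rough_guess(task: str) -> tuple[str, float]:
    return ("about " + task, 0.5)  # low quality, but always available

print(dispatch("2+2", [exact_lookup, rough_guess]))  # exact plugin satisfices
print(dispatch("3+3", [exact_lookup, rough_guess]))  # falls back to rough guess
```

The point of the design is the one the paragraph makes: because every task has several "well enough" routes, no single component is a bottleneck for improvement, which is one way a complex system can stay modifiable.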
> I think that a lot of transfer of thought-patterns will happen *implicitly*
> through interaction in shared environments.
> For this to happen, explicit declarative knowledge of thought patterns is
> not required, on the part of the human teachers.
Okay. If you don't know in advance how this will work, I predict that
nothing will happen.
> I doubt this is how things will go. I think human knowledge will be
> comprehensible by an AI *well before* the AI is capable of drastically
> modifying its own sourcecode in the interest of vastly increased
> intelligence.
I would expect the AI's understanding of source code to run well ahead of
its understanding of human language at any given point. The AI lives right
next to source code; human language is located in another galaxy by comparison.
> I think that humans will teach the AGI more than just "domain problems at
> the right level," I think that by cooperatively solving problems together
> with the AGI, humans will teach it a network of interrelated
> thought-patterns. Just as we learn from other humans via interacting with
> them.
I'm not sure we learn thought-patterns, whatever those are, from other
humans; but if so, it's because evolution explicitly designed us to do so.
Standing 'in loco evolution' to an AI, you need to know what
thought-patterns are, how they work, and what brainware mechanisms and
biases support the learning of which thought-patterns from what kind of
experience.
> I agree, it does not mean that an AI *must* do so. However, I hypothesize
> that to allow an AI to learn its initial thought-patterns from humans based
> on experiential interaction, is
> a) the fastest way to get to an AGI
> b) the best way to get an AGI that has a basic empathy for humans
Empathy is a good analogy, unfortunately. Humans are socialized by
interacting with other humans because we are *explicitly evolutionarily
programmed* to be socialized in this way. We don't pick up empathy as an
emergent result of our interaction with other humans. Empathy is hardwired.
It may be hardwired in such a way that it depends on environmental human
interaction in order to develop, but this does not make it any less
hardwired. An AGI is not going to automatically pick up basic human empathy
from interacting with humans any more than a rock would develop empathy for
humans if constantly passed from hand to hand. *Nothing* in AI is
automatic. Not morality, not implicit transfer of thought-patterns, not
socialization, *nada*. If you don't know how it works, it won't!
Besides, I don't think that hardwired empathy is the way to go, and the kind
of automatically, unintentionally acquired empathy you postulate strikes me
as no better. I don't think you can expect a seed AI to be enslaved by its
brainware in the same way as a human.
> Yes, you see more "code self-modification" occurring at the
> "pre-human-level-AI" phase than I do.
> This is because I see "intelligent goal-directed code self-modification" as
> being a very hard problem, harder than mastering human language, for an
> AGI.
This honestly strikes me as extremely odd. Code is vastly easier to
experiment with than human language; the AI can accumulate more experience
faster; there are no outside references to the black-box external world; the
AI can find its own solutions rather than needing the human one; and the AI
can use its own concepts to think rather than needing to manipulate
human-sized concepts specialized for human modalities that the AI may not
even have. Code is not easy, but I'd expect it to be a heck of a lot easier
than human language.
> Your argument was that "there's nothing special about human level
> intelligence." I sought to refute that argument by pointing out that, to
> the extent an AGI is taught by humans, there is something special about
> human level intelligence after all. Then you countered that, in your
> envisioned approach to AI, teaching by humans plays a smaller role than in
> my own envisioned approach.
Not a smaller role; a very different role, teaching a very different kind of
AI.
> And indeed, this suggests that if seed AI were
> achieved first by your approach rather than mine, the gap between human
> level and vastly superhuman level intelligence would be less.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT