Re: Seed AI (was: How hard a Singularity?)

From: James Higgins (jameshiggins@earthlink.net)
Date: Sun Jun 23 2002 - 13:32:57 MDT


At 01:56 PM 6/23/2002 -0400, Eliezer wrote:
>Ben Goertzel wrote:
>>Of course, this general statement is not true. Often, in software
>>engineering and other kinds of engineering, a very complex design is HARDER
>>to improve than a simple one.
>
>Evolution managed to sneak around this trap. An AI team will have to do
>so as well; for example, through constructing plugin satisficing architectures

Yes, it did, given massive amounts of time. We are not talking about
hundreds of thousands of years or more, though...

>>I doubt this is how things will go. I think human knowledge will be
>>comprehensible by an AI *well before* the AI is capable of drastically
>>modifying its own sourcecode in the interest of vastly increased
>>intelligence.
>
>I would expect the AI's understanding of source code to run well ahead of
>its understanding of human language at any given point. The AI lives
>right next to source code; human language is located in another galaxy by
>comparison.

There is a universe of difference, however, between understanding source
code and knowing which changes will improve your intelligence. Certainly
the AI could write a software application, but identifying the changes
that would increase its own intelligence is a whole different story.

>>I think that humans will teach the AGI more than just "domain problems at
>>the right level," I think that by cooperatively solving problems together
>>with the AGI, humans will teach it a network of interrelated
>>thought-patterns. Just as we learn from other humans via interacting with
>>them.
>
>I'm not sure we learn thought-patterns, whatever those are, from other
>humans; but if so, it's because evolution explicitly designed us to do so.
>Standing 'in loco evolution' to an AI, you need to know what
>thought-patterns are, how they work, and what brainware mechanisms and
>biases support the learning of which thought-patterns from what kind of
>environmental cues.

Let's use "design patterns" in software development as an
example. Understanding design patterns offers a major improvement in
software architecture & engineering capability. These days I use this
type of thinking extensively when designing systems. However, I can
easily remember a time before I knew of (or thought along the lines
of) design patterns. Thus, I have clearly learned to think in this
way. Moving from procedural to object-oriented programming also
required a major shift in thinking (it took me roughly a year to
digest and comprehend WHY it was good and how it was useful). But
today I can't imagine not thinking in OOP terms. So we can learn
thought-patterns via human knowledge and interaction. Unfortunately, I
believe we don't understand the most basic & powerful thought-patterns
used by the human mind (because they operate below the conscious level).
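
For concreteness, here is a minimal sketch of the kind of shift I mean
(purely my own illustration, not code from any AI project): the same
task written procedurally, then in an object-oriented style where each
object carries its own behavior.

    # Illustrative only: the same computation, first procedurally,
    # then with each object knowing its own behavior.
    def area(shape, dims):
        if shape == "circle":
            return 3.14159 * dims[0] ** 2
        elif shape == "rectangle":
            return dims[0] * dims[1]

    class Circle:
        def __init__(self, radius):
            self.radius = radius
        def area(self):
            return 3.14159 * self.radius ** 2

    class Rectangle:
        def __init__(self, width, height):
            self.width, self.height = width, height
        def area(self):
            return self.width * self.height

    # Procedural: callers branch on a type tag.  OOP: new shapes can
    # be added without touching existing code.
    print(area("circle", [2]))
    print(Circle(2).area())

Once you start thinking this way you stop seeing the branching version
as natural at all, which is the point about learned thought-patterns.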

>>I agree, it does not mean that an AI *must* do so. However, I hypothesize
>>that to allow an AI to learn its initial thought-patterns from humans based
>>on experiential interaction, is
>>a) the fastest way to get to an AGI
>>b) the best way to get an AGI that has a basic empathy for humans
>
>Empathy is a good analogy, unfortunately. Humans are socialized by
>interacting with other humans because we are *explicitly evolutionarily
>programmed* to be socialized in this way. We don't pick up empathy as an
>emergent result of our interaction with other humans. Empathy is hardwired.

Um, I don't know that I completely agree with that. It is possible for the
degree of empathy a given human has to change over time. Thus I would not
say that it is hardwired.

>It may be hardwired in such a way that it depends on environmental human
>interaction in order to develop, but this does not make it any less
>hardwired. An AGI is not going to automatically pick up basic human
>empathy from interacting with humans any more than a rock would develop
>empathy for humans if constantly passed from hand to hand. *Nothing* in
>AI is automatic. Not morality, not implicit transfer of thought-patterns,
>not socialization,
>*nada*. If you don't know how it works, it won't!

Well, unless empathy is a natural consequence of general
intelligence. Can you prove otherwise? The same goes for the other
aspects you mention. They may not be "automatic", but no one really
knows either way. If you feel strongly one way or the other, feel free
to say so, but be aware (and make others aware) that it is purely your
"feeling" and is not based on any factual evidence whatsoever.

>>Yes, you see more "code self-modification" occurring at the
>>"pre-human-level-AI" phase than I do.
>>This is because I see "intelligent goal-directed code self-modification" as
>>being a very hard problem, harder than mastering human language, for
>>example.
>
>This honestly strikes me as extremely odd. Code is vastly easier to
>experiment with than human language; the AI can accumulate more experience
>faster; there are no outside references to the black-box external world;
>the AI can find its own solutions rather than needing the human one; and
>the AI can use its own concepts to think rather than needing to manipulate
>human-sized concepts specialized for human modalities that the AI may not
>even have. Code is not easy but I'd expect to be a heck of a lot easier
>than language.

But the code for an extremely complex system is very, very hard to
experiment with in any *intelligent* manner. I imagine that if you
gave all the code for a working pre-human AI to a "typical" programmer,
they would have little or no idea what to do with it. They could run it
and poke around at it a bit, but making an improvement would be by pure
luck, not design. And if the matter is further complicated by having no
clear reference point by which to gauge improvement vs. setback,
progress becomes very difficult (at best).
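
To make the "reference point" problem concrete, here is a trivial
sketch (purely my own illustration, with a made-up evaluate() benchmark,
not anything from a real AI design): even the dumbest self-improvement
loop needs some objective measure to tell an improvement from a
setback, and it is exactly that measure nobody knows how to write for
intelligence.

    import random

    def evaluate(params):
        # Stand-in benchmark; a real system would need a meaningful
        # measure of "intelligence", which is the missing piece.
        return -sum((p - 3.0) ** 2 for p in params)

    params = [0.0, 0.0]
    best = evaluate(params)
    for _ in range(1000):
        candidate = [p + random.gauss(0, 0.1) for p in params]
        score = evaluate(candidate)
        if score > best:   # keep only changes that measurably help
            params, best = candidate, score

Without evaluate(), the loop is just random mutation; with a bad
evaluate(), it confidently optimizes the wrong thing.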

You seem to frequently miss the fact that there is a vast difference
between code and the system the code implements.

James Higgins


