RE: Seed AI (was: How hard a Singularity?)

From: Ben Goertzel (ben@goertzel.org)
Date: Sun Jun 23 2002 - 12:33:05 MDT


hi,

> What kind of knowledge is this implicit knowledge? How will the
> AI absorb
> it through interaction? Let's take the mnemonic experiential record of a
> human interaction; what kind of algorithms will absorb "abstract
> thought-patterns" from the record of human statements?

This is a kind of "procedure learning" but involving abstract cognitive
procedures rather than physical-world action procedures.... In Novamente it
would be handled by the generic "schema learning" mechanisms with
appropriate parameter settings.

> > I think that a lot of transfer of thought-patterns will happen
> *implicitly*
> > through interaction in shared environments.
> >
> > For this to happen, explicit declarative knowledge of thought
> patterns is
> > not required, on the part of the human teachers.
>
> Okay. If you don't know in advance how this will work, I predict that
> nothing will happen.

I have a fairly detailed understanding of how this could work...

A good example is learning the knowledge of how to do mathematical proofs.
From seeing how humans have done proofs, the system can learn patterns of
proof -- what kinds of proof strategies and tactics often work in what kinds
of situations.

This knowledge about proof strategies and tactics is not explicitly stated
in math books (except minimally and occasionally), but everyone who learns
advanced math learns it by induction from reading proofs and asking
questions about them...
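
As a rough illustration of the kind of induction I mean (a toy sketch of my
own, not part of any real theorem prover; the goal-kind labels and tactic
names are invented), one can tabulate, from a corpus of observed proofs,
which tactic succeeded on which kind of goal, and then rank tactics for a
new goal by how often they worked on similar goals before:

    from collections import defaultdict

    def learn_tactic_stats(proof_corpus):
        """proof_corpus: list of (goal_kind, tactic_used) pairs from successful proofs."""
        stats = defaultdict(lambda: defaultdict(int))
        for goal_kind, tactic in proof_corpus:
            stats[goal_kind][tactic] += 1
        return stats

    def suggest_tactics(stats, goal_kind):
        """Rank tactics by how often they have worked on goals of this kind."""
        return [t for t, _ in sorted(stats[goal_kind].items(), key=lambda kv: -kv[1])]

    corpus = [("universal-statement", "induction"),
              ("existential-statement", "construct-witness"),
              ("universal-statement", "induction"),
              ("equation", "rewrite")]
    print(suggest_tactics(learn_tactic_stats(corpus), "universal-statement"))
    # ['induction']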

A related example is learning how to program. The implicit ways and means
of software engineering are learned by humans largely by example. A lot of
this is learning abstract cognitive schema for analyzing various sorts of
problems, translating them into other problems, etc. We try to translate
this kind of knowledge into declarative form, but only with limited success.
In practice, it's learned by reading others' code, by coding experience, and
by collaboration with experienced coders. Subtract out the "reading others'
code" and "collaboration" parts and learning how to code would be a LOT
slower because one would have to brew all one's own coding-related cognitive
schema.
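
To make "translating one problem into another" concrete, here is the kind of
schema a coder absorbs mostly by example rather than from any textbook rule
(a toy illustration of mine, not anyone's actual curriculum): reduce "find
duplicates" to the already-solved problem of sorting, then scan adjacent
elements.

    def has_duplicates(items):
        ordered = sorted(items)  # translate the problem into a sorting problem
        # after sorting, any duplicates must sit next to each other
        return any(a == b for a, b in zip(ordered, ordered[1:]))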

> > I doubt this is how things will go. I think human knowledge will be
> > comprehensible by an AI *well before* the AI is capable of drastically
> > modifying its own sourcecode in the interest of vastly increased
> > intelligence.
>
> I would expect the AI's understanding of source code to run well ahead of
> its understanding of human language at any given point. The AI
> lives right
> next to source code; human language is located in another galaxy
> by comparison.

It's true, but source code is much, much more complicated than human
language...

> > I think that humans will teach the AGI more than just "domain
> problems at
> > the right level," I think that by cooperatively solving
> problems together
> > with the AGI, humans will teach it a network of interrelated
> > thought-patterns. Just as we learn from other humans via
> interacting with
> > them.
>
> I'm not sure we learn thought-patterns, whatever those are, from other
> humans;

If you were given a complex theorem to prove, how would you approach it?
Would you use patterns and strategies inspired by the proofs of others that
you'd read in the past?

If not, you'd have a very low chance of success, at least in many branches
of math, such as advanced analysis or number theory...

> > This is because I see "intelligent goal-directed code
> self-modification" as
> > being a very hard problem, harder than mastering human language, for
> > example.
>
> This honestly strikes me as extremely odd. Code is vastly easier to
> experiment with than human language; the AI can accumulate more
> experience
> faster; there are no outside references to the black-box external
> world; the
> AI can find its own solutions rather than needing the human one;
> and the AI
> can use its own concepts to think rather than needing to manipulate
> human-sized concepts specialized for human modalities that the AI may not
> even have. Code is not easy but I'd expect it to be a heck of a lot easier
> than language.

But code is hard; it involves very complex inferences, which is why most
people can't code.

Natural language does involve a lot of breadth of knowledge and experience,
but the thinking involved is mostly relatively simple.

A sentence like "Every man has a dog whom every girl likes to call by a
special pet name" is rare in English, but similar constructs are commonplace
in programming...
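
To see the parallel concretely (a toy illustration; the function and
argument names are invented), the nesting in that sentence (for every man
there is a dog such that every girl calls it by a special name) is exactly
the shape of construct programmers write routinely:

    def sentence_holds(men, girls, dog_of, pet_name):
        # for every man, his dog exists and every girl has a name for it
        return all(
            dog_of(man) is not None and
            all(pet_name(girl, dog_of(man)) is not None for girl in girls)
            for man in men
        )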

Of course, AGIs can have an easier time than humans dealing with complex
logical constructions, but still, I think coding requires much more advanced
cognitive schema/heuristics than human language processing.

Note that I'm not talking about "exactly human-simulative language
processing", just "acceptably communicative language processing" and "decent
language understanding, involving question-asking about unfamiliar domains."
Of course, an AGI's communications will seem alien and non-human, even using
human language, until way past the human-equivalent level (simulating
another intelligence being a harder problem than being one's own
intelligence).

> > And indeed, this suggests that if seed AI were
> > achieved first by your approach rather than mine, the gap between human
> > level and vastly superhuman level intelligence would be less.
>
> Quite.

But of course, I consider it very unlikely that your approach will lead to
seed AI in the foreseeable future ;>

However, I'd be very happy to be proved wrong!!

-- ben


