RE: Curriculum for AI

From: Colin Hales (colin@versalog.com.au)
Date: Wed Jan 01 2003 - 05:13:21 MST


Ben Goertzel:
>
> Gordon Worley wrote:
> > Going to four levels of learning doesn't make any sense, anyway.
> >
> > First, let's subtract 1 from all of those levels; it's a bit easier
> > to keep track of them that way.
> >
> > At level 0 there is no learning involved, per se. The mind is just
> > being force-fed facts: they magically appear in its memory.
> >
> > At level 1 there is the normal learning that we are all familiar
> > with. It's the kind that goes on between children and parents, goes
> > on while reading, and goes on in schools (or at least is supposed to).
> >
> > Level 2 is more interesting. This is where you learn techniques like
> > associating memories with each other to strengthen your ability to
> > remember them, and where you learn that practice helps you get
> > better at complex tasks like algebra and calculus.
> >
> > Level 3 is flat-out exciting. Now you can learn to learn better. If
> > you can't understand the theory of Friendly AI, all you have to do
> > is reprogram yourself to be able to learn it. It goes further than
> > that, though. At this level you also get the ability to reprogram
> > yourself to reprogram yourself better. You're already working at
> > that level, so it's no problem to just jump over and rewrite the
> > running code (assuming the system supports it, but in the general
> > case it's already available).
> >
> > If a level 4 exists, it would extend to ontotechnology, but it would
> > be debatable whether to continue to think of this in terms of levels
> > of learning. That would certainly still be an applicable domain, but
> > things were already starting to open up at level 3 and the seams are
> > busted wide open at level 4.
>
> Bateson reckons that level 3 learning occurs in humans only very
> slowly, i.e. over years, via complex forms of "cognitive maturation."
>
> Basically, your comments are strongly in accordance with Bateson's
> reflections on the topic, which is not surprising...
>
> -- Ben G

I know nothing of Bateson; sounds worth a look. I hadn't actually found
anyone talking like this, although I expected it was out there
somewhere... nothing new under the sun and all that.

> Humans would fail a test suite created for dolphins, and vice versa --
> etc. etc. etc.
> I note that IQ tests, SAT tests, and the like, do not take explicit
> account of embodiment.

It's assumed to be human, I suppose. When we talk about this kind of
classification we're really talking 'species', aren't we? Training a human
is not like training a dog, which is not like training a rat. In a future
world populated with various AI 'species' we're going to have to deal with
appropriate training for each. We have a new taxonomy/morphology to define,
without the biological constraints to help. We have to find the hidden
orthogonal axes in the biological vector space of classification, add new
non-biological axes (e.g. groundedness), place each species in that space,
and then train and test to suit. Sounds like a new industry - working out
the AI speciation tree.
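
To make the 'axes' idea concrete, here's a minimal sketch (Python; the
axes, species and coordinates are all invented purely for illustration):
treat each species as a point in a small feature space, and pick a
training/testing regime from its nearest catalogued neighbour.

```python
# Sketch only: the axes, species and coordinates are hypothetical.
from math import dist

# Hypothetical orthogonal axes, each scaled 0.0-1.0:
AXES = ("embodiment", "groundedness", "meta_learning_level")

species = {
    "human":    (1.0, 1.0, 0.9),
    "dolphin":  (1.0, 1.0, 0.7),
    "chat_ai":  (0.0, 0.2, 0.5),   # disembodied, weakly grounded
    "robot_ai": (0.8, 0.7, 0.5),
}

def nearest_species(candidate, catalogue):
    """Return the catalogued species closest to a candidate vector --
    a stand-in for 'which training/testing regime fits best'."""
    return min(catalogue, key=lambda name: dist(candidate, catalogue[name]))

# A new AI lands somewhere in the space; train and test it like its
# nearest neighbour in the speciation tree.
print(nearest_species((0.9, 0.8, 0.6), species))   # -> robot_ai
```

Of course the hard part is finding the right axes, not computing the
distances - that's the new industry.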

> I think that tests involving embodiment are interesting. But I don't
> think that a test NOT involving embodiment is intrinsically inapplicable
> to embodied AI's.
> Indeed, if the "embodiment is necessary for AI" theory is correct, then
> embodied AI's should do far better on tests NOT involving embodiment in
> any explicit way. No?

If you mean an AI trained embodied, subsequently disembodied, and then
tested on the same material against another AI somehow trained without
embodiment - yes.

Maybe the whole embodiment discussion stemmed from not understanding which
meta-learning level any particular AI occupies, and the target environment
of the AI (including groundedness issues). IMO, for level 2, embodiment is
mandatory. However, I can see it being possible for an intended 'level 2'
AI to become, in effect, relegated to 'level 1' by its embodiment,
substrate and grounding choices. E.g. the poor AI actually needs rote
learning and gets a Montessori environment - tough ask! (Is NOMAD in this
position?) I can also see a well-implemented level 1 outperforming a
poorly implemented level 2. An interesting problem domain indeed. I'm
fairly sure 'meta-learning level' is one of the orthogonal axes mentioned
above.
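
For what it's worth, here's a toy sketch of the level 1 vs level 2
distinction (Python; the task and all the numbers are invented for
illustration). Both learners estimate the mean of some data, but the
level 1 learner runs a fixed update rule, while the level 2 learner also
adjusts its own learning rate according to its progress - learning about
its own learning.

```python
# Sketch only: a toy contrast between 'level 1' (fixed learning rule)
# and 'level 2' (a learner that also tunes its own rule).

def fit_level1(data, steps=100, lr=0.01):
    """Level 1: estimate the mean of `data` with a fixed-rate update."""
    w = 0.0
    for _ in range(steps):
        for x in data:
            w += lr * (x - w)      # the rule itself never changes
    return w

def fit_level2(data, steps=100, lr=0.01):
    """Level 2: same task, but the learner adapts its own learning
    rate according to whether its error is still falling."""
    w, prev_err = 0.0, float("inf")
    for _ in range(steps):
        err = sum((x - w) ** 2 for x in data) / len(data)
        # Adjust the rule itself, not just the estimate:
        lr = min(lr * 1.1, 0.5) if err < prev_err else lr * 0.5
        prev_err = err
        for x in data:
            w += lr * (x - w)
    return w

data = [2.0, 2.5, 1.5, 2.2]
print(fit_level1(data), fit_level2(data))   # both approach the mean
```

A level 3 learner, on this picture, would go one further and rewrite
fit_level2 itself.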

With this and Ben's suggested training options, I think we've probably
complicated Michael's training program enough now! :-) BTW: Apologies to
Peter if I "hath projected too much" (from my ken of his AGI) in my original
post.

cheers,

Colin Hales


