From: Colin Hales (colin@versalog.com.au)
Date: Tue Dec 31 2002 - 18:22:54 MST
Michael Roy Ames wrote:
>Let the games begin!
<snip>
>These are links to three formats of the notes:
>Word97
>http://www.geocities.com/michaelroyames/Curriculum-for-AI-20021223.doc
>HTML - Readable stuff starts after a long table of contents.
>http://www.geocities.com/michaelroyames/Curriculum-for-AI-20021223.htm
>RTF (Works with Abiword)
>http://www.geocities.com/michaelroyames/Curriculum-for-AI-20021223.rtf
<more snip>
>
> What problems do you see with this approach?
>
Peter Voss wrote...
>I strongly believe that during AGI proof-of-concept development, it is
>crucial that we not only look at what the system accomplishes, but *how* it
>does it - i.e. the design's construct validity.
<snip>
>One of the really difficult issues is designing these development programs
>without embodying too much of the designer's specific AGI theory/
>architecture.
Hi Michael et al,
I have had a look at your doc. Two issues, IMO:
Issue a. Learner Type
-------------------------------------
The issue of 'intuition', and other comments in the doc about pre-configured
knowledge, indicate that there is something in need of more attention. Can I
suggest explicitly recognising it? The choices are roughly classes, like
'shock levels' :-)
1) Automaton. Fixed, built-in knowledge of X, Y, Z... No training.
2) Learner. Learns X, Y, Z...
3) Meta-Learner. Learns to learn X, Y, Z...
4) Meta-Meta-Learner. This machine is Eliezer's SL4 self-modifying,
subliming beastie! Performs brain surgery on itself. Too hard for my poor
brain.
The assumption in the given training course is class 2) - at least it appears
to be. In my rough personal view of things, Class 2 is an AI and Class 3 is
an AGI. I hold that the training of class 3) is a whole different thing to
class 2). A class 3 embodied AGI goes to Montessori (e.g. Edelman's roaming
block-eater NOMAD at the Neurosciences Institute) - a somewhat different
learning regime.
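To make the distinction concrete, here's a rough toy sketch in Python. It is
entirely my own framing - the task, class names and numbers are illustrative
and not from Michael's doc. The class 1 automaton applies a rule the designer
hard-wired, the class 2 learner fits parameters for one fixed task, and the
class 3 meta-learner adjusts its own learning procedure (here, just its
learning rate) from one exposure to the next:

class Automaton:                     # Class 1: fixed behaviour, no training
    def predict(self, x):
        return 2 * x + 1             # rule hard-wired by the designer

class Learner:                       # Class 2: learns a, b for this one task
    def __init__(self, lr=0.01):
        self.a, self.b, self.lr = 0.0, 0.0, lr
    def train(self, data, epochs=200):
        for _ in range(epochs):
            for x, y in data:
                err = (self.a * x + self.b) - y
                self.a -= self.lr * err * x   # gradient step on squared error
                self.b -= self.lr * err
    def predict(self, x):
        return self.a * x + self.b

class MetaLearner:                   # Class 3: learns how to learn - it tunes
    def __init__(self):              # its own learning procedure across tasks
        self.lr = 0.001
    def train_on_task(self, data):
        best_lr, best_err, best_model = self.lr, float("inf"), None
        for lr in (self.lr * 0.5, self.lr, self.lr * 2.0):  # vary the procedure itself
            candidate = Learner(lr)
            candidate.train(data)
            err = sum((candidate.predict(x) - y) ** 2 for x, y in data)
            if err < best_err:
                best_lr, best_err, best_model = lr, err, candidate
        self.lr = best_lr            # the improved learning strategy carries forward
        return best_model

data = [(x, 3 * x + 2) for x in range(5)]    # a toy task: y = 3x + 2
meta = MetaLearner()
for _ in range(5):
    model = meta.train_on_task(data)         # repeated exposure across 'lessons'
print(meta.lr, round(model.predict(4), 2))   # lr has adapted upward; predicts ~14.0

Class 4 would, in the same spirit, be a machine that rewrites MetaLearner
itself.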
Issue b. The symbol grounding issue:
----------------------------------
"Symbol grounding will occur when the AI connects an internal model with
external reality...."
Are you sure your procedure actually does that? There is an assumption that
the nature of the sensors, their connection, and their relation to the
computational substrate are irrelevant. I would hold that the grounding, in
the AI assumed by the training course, is not in the AI at all, but in the
human observers of it.
=========================================
So....
"> What problems do you see with this approach?"
Like Peter, I see assumptions (in the intro) that betray an implicit and
specific philosophical/design position. Renamed, say, "Training for a Class
2, Human-grounded, Unembodied AI", the doc would recognise that context, but
it wouldn't achieve much else except to let readers doing class 3 with
embodiment look elsewhere in a more generalised training standard - another
chapter in a future version of your training program, perhaps. Some sort of
calibrated teaching/testing in each classification has to be a good thing
for design comparisons.
I hope this is of some use.
That's my US$0.02 = A$0.04ish. A bargain! :-)
Cheers to you all for Y2K+3.
regards,
Colin Hales