From: Michael Roy Ames (michaelroyames@yahoo.com)
Date: Wed Jan 01 2003 - 07:20:52 MST
Dear SL4,
Colin wrote:
>
> The issue of 'intuition' and other comments in the doc about
> pre-configured knowledge indicates that there is something in need of
> more attention. Can I suggest explicitly recognising it?
The issue of 'pre-configured knowledge' is IMO central to understanding
how an AGI gets to be an AGI. A being's initial ability to learn
*anything* relies on what is already there in its brain. I hadn't
wanted to discuss that subject in the Curriculum document; it is a huge
idea-area that is not essential to understanding or using the Lessons
and Games. Perhaps I will have to write a chapter laying out something
like your 'learning levels', and where the curriculum is aimed, to
answer the questions you have raised.
>
> 1) Automaton. Fixed learning about X,Y,Z...No training.
> 2) Learner. Learns X,Y,Z.....
> 3) Meta Learner. Learns to learn X,Y,Z....
> 4) Meta-Meta Learner. [snip]
>
Nice levels. 3) is the one I'm interested in, and the one I'm targeting
the curriculum at. 1) is like a DBMS. 2) is like an expert system:
'narrow' AI. 4) can be bootstrapped from 3), and so we (unenhanced
humans) will never build it.
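To make the distinction concrete, here is a toy sketch of my own in
Python (entirely hypothetical; nothing from the Curriculum document):
Level 1 is a hard-coded rule; Level 2 fits a single task with a fixed
learning procedure; Level 3 adjusts its own learning procedure across
tasks, which is the 'learning to learn' I want the curriculum to
exercise.

import random

def automaton(x):
    # Level 1: a fixed rule, wired in from the start. No training.
    return 2.0 * x

class Learner:
    # Level 2: fits one task (y = w*x) from examples, but its learning
    # procedure (here, the step size) is fixed forever.
    def __init__(self, step=0.01):
        self.w = 0.0
        self.step = step

    def train(self, examples, epochs=10):
        for _ in range(epochs):
            for x, y in examples:
                self.w += self.step * (y - self.w * x) * x
        return self.w

class MetaLearner:
    # Level 3: learns *how to learn*. It tunes the step size across
    # tasks, so later tasks are learned better than earlier ones.
    def __init__(self):
        self.step = 0.001  # initial guess at how to learn

    def error_with_step(self, examples, step):
        w = Learner(step).train(examples)
        total = 0.0
        for x, y in examples:
            d = y - w * x
            total += d * d
        return total

    def train_on_task(self, examples):
        # Try a smaller and a larger step size; keep whichever learns
        # this task best. Crude, but it genuinely modifies the learning
        # process itself, not just the learned knowledge.
        candidates = [self.step * 0.5, self.step, self.step * 2.0]
        self.step = min(candidates,
                        key=lambda s: self.error_with_step(examples, s))

def make_task():
    # A random 'learn y = w*x' task with 20 example pairs.
    w = random.uniform(-3.0, 3.0)
    return [(x, w * x) for x in (random.uniform(-1.0, 1.0) for _ in range(20))]

if __name__ == "__main__":
    random.seed(0)
    meta = MetaLearner()
    for i in range(10):
        meta.train_on_task(make_task())
        print("after task %d: step size = %.4f" % (i + 1, meta.step))

Running it, the step size drifts toward values that fit new tasks
better. A trivial effect, but it is the 2)/3) boundary in miniature:
the meta-learner's 'pre-configured knowledge' includes machinery for
revising how it learns, not just what it has learned.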
>
> The assumption in the given training course is class 2). At least it
> appears to be.
>
3) is where I'm aiming. But I should probably say that in the document.
>
> "Symbol grounding will occur when the AI connects an internal model
> with external reality...."
> Are you sure your procedure actually does that?
>
My lessons and games do not do that. Symbol grounding happens within
the AGI, and I have tried to avoid talking about the AGI's 'internals'.
The microdomains described in Semesters 1 through 4 provide
*opportunity* for symbol grounding, but only within a microdomain,
which is not base-reality. Only in Semester 5 do I suggest a real-world
(and real-time) interface, and that would be the first 'external
reality' input in the way I understand the phrase.
>
> Like Peter, I see assumptions (in the intro) that betray an implicit
> and specific philosophical/design position.
>
I have no philosophical position on AI design, other than: 'It had
better damn well be Friendly!' IMO whatever works is plenty good
enough. Within the Curriculum document I have taken (or tried to take)
a "what it looks like from the outside" point of view. I will revise it
to make this clearer.
>
> That's my US$0.02 = A$0.04ish. A bargain! :-)
>
A bargain indeed! Thank you for the input. #:)
Michael Roy Ames