From: Gordon Worley (redbird@rbisland.cx)
Date: Thu Oct 25 2001 - 19:48:44 MDT
At 2:07 PM -0400 10/25/01, Xavier Lumine wrote:
>That's a specialized problem a computer could be taught how to do
>fairly easily and is an example of Classical AI (i.e. the "wrong"
>path). I'm fairly sure teaching a computer to do this kind of
>analysis is not a good indication of human intelligence.
Well, maybe you've never done much statistical analysis, but the
problem of deciding which method is appropriate to apply to the
data, and then settling on an appropriate interpretation of the
results, requires a great deal of general intelligence. If I
honestly thought Classical AI could do this, I wouldn't have brought
it up here. Of course, just like any test, one could try to write a
Classical AI that pretends to be able to do this and might do pretty
well within a limited domain, but given an unbounded domain of
questions the Classical AI will eventually fail. Duh, we all know
this. So, given that we have a GIAI, the Gordon Test (he he, I get
this one named after me) should be a good method of checking whether
it is of at least human-level intelligence.
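To make the "limited domain" point concrete, here is a rough Python
sketch (an illustration only; the scipy-based choose_and_run helper
is hypothetical, not anyone's actual system) of the kind of canned,
rule-based method chooser a Classical AI approach amounts to. It can
pick between a two-sample t-test and a Mann-Whitney U test, and that
is the entire extent of its competence; ask it about paired data,
three groups, or anything else and it simply has no answer.

    # Rule-based "method chooser" for exactly one narrow question:
    # comparing two independent samples.  Within this tiny domain it
    # looks competent; outside it, it has nothing to say.
    from scipy import stats

    def choose_and_run(sample_a, sample_b, alpha=0.05):
        # Rule 1: if both samples look roughly normal (Shapiro-Wilk),
        # use a two-sample t-test.
        normal_a = stats.shapiro(sample_a).pvalue > alpha
        normal_b = stats.shapiro(sample_b).pvalue > alpha
        if normal_a and normal_b:
            method = "two-sample t-test"
            result = stats.ttest_ind(sample_a, sample_b)
        else:
            # Rule 2: otherwise fall back to a rank-based test.
            method = "Mann-Whitney U"
            result = stats.mannwhitneyu(sample_a, sample_b)
        return method, result.pvalue

    if __name__ == "__main__":
        import numpy as np
        rng = np.random.default_rng(0)
        a = rng.normal(0.0, 1.0, 30)
        b = rng.normal(0.5, 1.0, 30)
        method, p = choose_and_run(a, b)
        print(method, p)
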
In case there's any confusion, choosing an appropriate method is,
much like programming or using natural language, an art rather than
an engineering task, and the kind of task I think most folks here
would agree Classical AI can't really do, but a GIAI could.
--
Gordon Worley                      `When I use a word,' Humpty Dumpty
http://www.rbisland.cx/            said, `it means just what I choose
redbird@rbisland.cx                it to mean--neither more nor less.'
PGP:  0xBBD3B003                   --Lewis Carroll