Re: Thoughts on AI testing

From: Xavier Lumine
Date: Thu Oct 25 2001 - 12:07:11 MDT

That's a specialized problem a computer could be taught to do fairly
easily, and it's an example of Classical AI (i.e., the "wrong" path). I'm
fairly sure that teaching a computer to do this kind of analysis is not a
good indication of human intelligence.

>From: Gordon Worley <>
>Subject: Thoughts on AI testing
>Date: Wed, 24 Oct 2001 19:37:04 -0400
>A while back there was a short thread about how to test how
>smart/intelligent AIs are. A number of interesting ideas came out of
>this and suddenly, about an hour ago while sitting in my Statistics
>II class, I realized a great way to test if an AI is of human level
>intelligence: if it can choose the correct statistical method with
>which to analyze data (computers are already exceedingly good at
>doing the analysis and make certain analyses possible that humans
>couldn't reasonably do on their own). In order to do this, one must
>understand the content of the problem (what the numbers mean), the
>nature of the data (qualitative or quantitative; parametric or
>nonparametric, etc.), and what kind of results are expected (there's
>more going on, but the other understandings could be seen as
>subunderstandings of those I listed). This means the AI must be able
>to read some kind of language, understand what it means, and have the
>heuristics to take the information ve is given and make the right
>choices about tests.
>Personally, I've never been a huge fan of the Turing Test, but this
>seems like a good alternative that requires the same kinds of skills,
>just in a different domain.
>Gordon Worley
>PGP: 0xBBD3B003
>
>`When I use a word,' Humpty Dumpty said, `it means just what I choose
>it to mean--neither more nor less.' --Lewis Carroll
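
For what it's worth, the test-selection procedure Gordon describes can be
written as a small decision table, which is arguably why it reads as
Classical AI: once someone has encoded the textbook rules, no understanding
of the problem's content is needed. A minimal sketch, with illustrative
(not exhaustive or authoritative) rules and test names:

```python
# A toy rule-based chooser for a statistical test, given coarse facts
# about the data. The rules and test names below are a simplified,
# hypothetical decision table for illustration only -- real method
# selection depends on many more assumptions about the data.

def choose_test(data, groups, parametric, paired=False):
    """Pick a statistical test from coarse properties of the data.

    data: "quantitative" or "qualitative" (categorical)
    groups: number of groups being compared
    parametric: True if parametric assumptions (e.g. normality) hold
    paired: True if observations are paired/matched
    """
    if data == "qualitative":
        return "chi-squared test"
    if groups == 2:
        if parametric:
            return "paired t-test" if paired else "two-sample t-test"
        return "Wilcoxon signed-rank test" if paired else "Mann-Whitney U test"
    if groups > 2:
        return "one-way ANOVA" if parametric else "Kruskal-Wallis test"
    return "one-sample t-test" if parametric else "sign test"

print(choose_test("quantitative", 2, True))   # two-sample t-test
print(choose_test("quantitative", 3, False))  # Kruskal-Wallis test
```

The hard part -- reading a problem statement and extracting what the
numbers mean -- is exactly what this sketch leaves out, which is the crux
of the disagreement above.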

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT