From: Durant Schoon (durant@ilm.com)
Date: Tue Jul 31 2001 - 18:58:36 MDT
> From: Jimmy Wales <jwales@bomis.com>
>
> Of course, a "not quite human" AI isn't likely to be much like a mentally retarded
> human. Probably more like an "idiot savant", i.e. hyperintelligence in some ways,
> but really stupid in other ways. As an example, we are shocked to see some autistic [...]
Hmm, I'm beginning to think that if we have to come full circle, comparing computers
to idiot savants, who in turn get compared to retarded humans, just to understand
computers, then the "double-speed 0.5 human brain" analogy isn't helping us get anywhere.
We can accept Carl's back-of-the-envelope calculations to determine when an AI
singularity is likely to occur without worrying about relying on idiot savants
and the retarded to get us there :-)
> I believe that Gordon's original point can be expanded by saying that _depending on
> the particular *way* in which our AI is stupid, and the *reasons* for that stupidity,
> simply throwing more CPU cycles just makes a really fast moron.
Yes, that point seems to be on the right track, or at least it's familiar territory in AI.
Please, someone step in and correct me if I'm wrong, but one school of AI has
maintained that what is missing is Common Sense(*). We can crack Freshman Calculus,
we can beat human Grand Masters at chess, but we still can't carry on a conversation
(the Turing Test(**)).
Cyc is supposed to "understand" our world, right? I really wonder if anyone has tried
to use Cyc to solve any of the problems that can only be solved with a decent Common
Sense Engine. Give an agent access to a CSE, present it with a problematic situation,
and see whether the agent can solve the problem.
You are hungry. There is a banana in the refrigerator. In your pocket is a bundle
of lint. What do you do?
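Just to make that test concrete, here's a toy sketch in Python (entirely made up,
nothing like Cyc's actual API): the "Common Sense Engine" is a handful of dumb facts,
and the agent only has to chain them to pick the banana over the lint.

# Hypothetical toy, NOT Cyc: a "Common Sense Engine" as a bag of simple facts.
class ToyCSE:
    def __init__(self):
        self.edible = {"banana"}            # things people eat
        self.relieves = {"hunger": "eat"}   # eating relieves hunger

    def action_for(self, problem, items):
        # What kind of action relieves this problem? (Only "eat" is known here.)
        verb = self.relieves.get(problem)
        if verb != "eat":
            return None
        # Pick the first item at hand that common sense says is edible.
        for item in items:
            if item in self.edible:
                return "eat the " + item
        return None

cse = ToyCSE()
# You are hungry; there's a banana in the fridge and lint in your pocket.
print(cse.action_for("hunger", ["lint", "banana"]))   # -> eat the banana

The interesting question is whether Cyc can do the same kind of thing once the facts
number in the millions instead of a handful.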
Has anyone put Cyc in Zork yet?
Has anyone introduced Cyc to Eliza?
Has anyone used Cyc to build a better bicycle or improve national foreign policy?
You could even start devising and applying wisdom tournaments to Cyc...
All we have is tactical battlefield analysis and SecureCyc, which sounds like an
automated AI hacking program (or was it CycSecure?)
Did they succeed with Common Sense? Are they keeping the good stuff to themselves?
The Common Sense Problem seems to be a major reason why computers and idiot
savants *seem* stupid, but I suppose one could build a SeedAI without achieving
Common Sense first (our notions of intelligence are very anthropomorphic).
I, for one, do think that "learning to learn better" *is* the whole shebang. Do
that well first, and you can figure out the rest. Having Common Sense allows an
intelligence to apply verself to the surrounding world in order to understand
and solve practical problems. That's something one would want eventually, but it
isn't necessary as a starting point.
(*) The idea is that computers don't have the millions of little rules of common
sense that every child has. "You can pull something with a string, but not push
with it." "People are expected to wear clothes to school." "People who live in
Russia typically speak Russian." "A banana is something people eat." "Don't put
bananas in the refrigerator because they turn brown. People don't like brown
bananas." "You are supposed to share your bases with others." These are "dumb"
things kids know, but computers don't. If computers knew these things, they might
seem a lot smarter and could put all that logical inferencing power to use.
This is the "school" seemingly favored by Minsky and Lenat...though I'm a
Yudkowskiest these days ;-)
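As a sketch of what I mean by "little rules" (again, hand-waved, and nothing like
Cyc's actual assertion language), you could imagine each one as a tiny
subject/relation/object triple; the hard part is that you need millions of them,
plus inference over them:

# Illustration only: common-sense rules as (subject, relation, object) triples.
facts = [
    ("string", "can", "pull things"),
    ("string", "cannot", "push things"),
    ("people", "wear to school", "clothes"),
    ("people in Russia", "typically speak", "Russian"),
    ("banana", "is", "something people eat"),
    ("refrigerated banana", "turns", "brown"),
    ("people", "dislike", "brown bananas"),
]

def ask(subject, relation):
    """Return everything asserted about (subject, relation)."""
    return [obj for (subj, rel, obj) in facts if subj == subject and rel == relation]

print(ask("string", "can"))      # ['pull things']
print(ask("string", "cannot"))   # ['push things']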
Every AI project that attempts to understand the world will need to represent it
somehow, whether it's WebMind or SingInst's. Whether you call it "Common Sense" or
not doesn't matter. It's a big, tedious problem. The Cyc team has tried to crack it
with domain experts. Bio/WebMind sounds like they are trying to understand human
language first and then "parse the web". The knowledge has to come from somewhere.
(**) The Turing Test is of some interest, but it's just a milestone on the road from
SeedAI to Singularity.
-- Durant Schoon