From: Giulio Prisco (firstname.lastname@example.org)
Date: Sat Jan 29 2005 - 23:32:32 MST
New Scientist: Computers can learn the meaning of words simply by
plugging into Google. The finding could bring forward the day that
true artificial intelligence is developed.
Paul Vitanyi and Rudi Cilibrasi of the National Institute for
Mathematics and Computer Science in Amsterdam, the Netherlands,
realised that a Google search can be used to measure how closely two
words relate to each other. For instance, imagine a computer needs to
understand what a hat is.
To do this, it needs to build a word tree - a database of how words
relate to each other. It might start by comparing any two words. For
example, if it googles "hat" and "head"
together it gets nearly 9 million hits, compared to, say, fewer than
half a million hits for "hat" and "banana". Clearly "hat" and "head"
are more closely related than "hat" and "banana".
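The idea of turning hit counts into a relatedness score can be sketched as follows. This is a minimal illustration in the spirit of the researchers' measure (their actual metric is the Normalized Google Distance); the hit counts here are hypothetical stand-ins for real Google queries, with the joint counts loosely based on the figures in the article, and N is an assumed index size.

```python
from math import log

N = 8e9  # assumed total number of indexed pages

# Hypothetical hit counts; single-word counts are invented for illustration,
# joint counts echo the article's "nearly 9 million" and "fewer than half a million".
hits = {
    ("hat",): 50e6,
    ("head",): 200e6,
    ("banana",): 10e6,
    ("hat", "head"): 9e6,
    ("hat", "banana"): 0.5e6,
}

def ngd(x, y):
    """Normalized Google Distance: smaller means more closely related."""
    fx, fy, fxy = hits[(x,)], hits[(y,)], hits[(x, y)]
    return (max(log(fx), log(fy)) - log(fxy)) / \
           (log(N) - min(log(fx), log(fy)))

print(ngd("hat", "head"))    # smaller distance: closely related
print(ngd("hat", "banana"))  # larger distance: weakly related
```

With these numbers, "hat"/"head" comes out closer than "hat"/"banana", matching the intuition in the article.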
The technique has managed to distinguish between colours, numbers,
different religions and Dutch painters based on the number of hits
they return, the researchers report in an online preprint.
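How such distances separate categories can be illustrated with a toy nearest-neighbour grouping. The distance table below is entirely hypothetical (real values would come from hit counts as above); it only shows that once words in the same category sit closer together, simple comparisons recover the categories.

```python
# Hypothetical pairwise distances: colours near colours, numbers near numbers.
dist = {
    frozenset(("red", "blue")): 0.2,
    frozenset(("three", "seven")): 0.25,
    frozenset(("red", "three")): 0.8,
    frozenset(("blue", "three")): 0.7,
    frozenset(("red", "seven")): 0.75,
    frozenset(("blue", "seven")): 0.85,
}

def nearest(word, candidates):
    """Return the candidate word with the smallest distance to `word`."""
    return min(candidates, key=lambda c: dist[frozenset((word, c))])

print(nearest("blue", ["red", "three"]))   # groups with the colour
print(nearest("seven", ["red", "three"]))  # groups with the number
```

The researchers' own experiments used a more sophisticated tree-building method over such distances, but the grouping principle is the same.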
The pair's results do not surprise Michael Witbrock of the Cyc project
in Austin, Texas, a 20-year effort to create an encyclopaedic
knowledge base for use by a future artificial intelligence. Cyc
represents a vast quantity of fundamental human knowledge, including
word meanings, facts and rules of thumb.
Witbrock believes the web will ultimately make it possible for
computers to acquire a very detailed knowledge base. Indeed, Cyc has
already started to draw upon the web for its knowledge. "The web might
make all the difference in whether we make an artificial intelligence
or not," he says.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT