From: Thomas Buckner (tcbevolver@yahoo.com)
Date: Thu Jan 31 2008 - 15:36:21 MST
--- Thomas McCabe <pphysics141@gmail.com> wrote:
> * A computer can never really understand the world the way
>   humans can. (Searle's Chinese Room)
>     o Rebuttal synopsis: This idea is mainly the result of
>       previous, abandoned AI projects, where (say) a cow was
>       represented by a single string variable, "COW". Obviously,
>       using the word "COW" isn't going to make the computer
>       understand the full range of experiences we associate with
>       real-life cows. However, this problem is specific to
>       old-fashioned AI systems, *not* AIs or computers in general.
>
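To make the quoted point concrete, here is a toy Python
contrast (all names and properties are invented for
illustration). In the old-style system, the program's entire
"knowledge" of cows is one opaque token; even a slightly
richer structure lets the program answer simple questions it
was never literally hard-coded for, though neither comes
anywhere near human understanding:

    # Toy contrast; names and properties invented for illustration.

    # GOFAI-style: the entire "understanding" of cows is one string.
    ANIMAL = "COW"

    # Slightly richer: the symbol is linked to other facts, so the
    # program can answer questions about cows, not just echo the token.
    cow = {
        "is_a": "mammal",
        "sound": "moo",
        "gives": ["milk", "leather"],
        "legs": 4,
    }

    def sound_of(concept):
        # Look up a property instead of returning the bare symbol.
        return concept.get("sound", "unknown")

    print(ANIMAL)          # -> COW (and that is all it can say)
    print(sound_of(cow))   # -> moo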
The most direct rebuttal to the Chinese Room is, as
Hofstadter and Dennett have argued, the Systems Reply: if
the system consistently carries on a conversation in
Chinese, then the system as a whole does, indeed,
understand Chinese. This is true whether the system
resembles our brain closely or hardly at all. Bear in mind,
of course, that very dumb systems can seem intelligent,
cf. Eliza (a toy sketch follows below). The Chinese Room
thought experiment stipulates that the answers are
genuinely intelligent and not canned.
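Eliza is worth seeing in code, because it shows how little
machinery is needed to fake a conversation. Below is a
minimal Eliza-style responder in Python; the rules are
invented for illustration (the real ELIZA script was much
larger), but the principle is identical: keyword matching
and template substitution, with no understanding anywhere:

    import re

    # A handful of canned pattern -> reply rules, invented for
    # illustration. Each rule captures part of the user's input
    # and echoes it back inside a fixed template.
    RULES = [
        (re.compile(r"\bI am (.*)", re.I), "Why do you say you are {0}?"),
        (re.compile(r"\bI feel (.*)", re.I), "Tell me more about feeling {0}."),
        (re.compile(r"\bbecause (.*)", re.I), "Is that the real reason?"),
    ]

    def respond(utterance):
        # Return the first matching canned reply, or a stock prompt.
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(match.group(1).rstrip(".!?"))
        return "Please go on."

    print(respond("I am worried about the Chinese Room."))
    # -> Why do you say you are worried about the Chinese Room?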
Tom Buckner