From: Mikko Särelä (msarela@cc.hut.fi)
Date: Wed Jan 18 2006 - 01:24:47 MST
On Tue, 17 Jan 2006, Daniel Radetsky wrote:
> > My more important point is that Woody's test is untestable. We have
> > no way to evaluate whether a machine is conscious of the meaning of
> > its inputs in the same way that a human is.
>
> Are you sure? Perhaps when we have a more complete understanding of the
> way that humans are conscious of the meaning of their inputs, we will
> realize that we can determine whether a given machine is similarly
> conscious. It sounds like you're just making an argument from lack of
> imagination. If you disagree, tell me why I should believe such a test
> is impossible, rather than nonexistent.
Ah, but in science the burden of proof is on the side that claims a
principle. So if you or Searle claim this method, you should also provide
a way in which it can be tested. Until that is done, it should be
considered untestable.
And if testing it requires understanding consciousness, it cannot really
help us understand consciousness, can it?
--
Mikko Särelä
http://thoughtsfromid.blogspot.com/
"Happiness is not a destination, but a way of travelling." Aristotle