From: Ben Goertzel (ben@goertzel.org)
Date: Sun Jun 09 2002 - 14:26:08 MDT
> Though I suspect that, as Ben
> and Eliezer demonstrated, should Gordon continue his
> efforts to arrive at a "'How-to guide' for thinking
> more rationally", he may just find that philosophy
> stands squarely in his way.
Here is my view on that.
Of course, philosophical issues are pertinent to the question of how to think
more rationally.
However, even without resolving these issues, it should be possible to write
a useful book on the subject, one that is helpful and informative to many people.
Similarly, philosophical issues arise in AI work -- the definitions of
intelligence, mind, and consciousness are thorny issues!
But even so, I think it is possible to write useful books and do useful
work -- and even create real AI -- without resolving these philosophical
issues.
Just as we interact with other humans all day long, in spite of not having resolved
the philosophical issues involved with the question "Are other minds *really*
conscious? How can we know?"
The philosophical quest is an important part of nearly any human endeavor,
but it's not a quest that ever arrives at a solution, so one certainly
shouldn't wait for philosophical answers before proceeding with concrete
projects!!
I like this phrase from computational learning theory: "Probably Approximately
Correct." Perhaps that is the most we can ever hope for?
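(A minimal gloss, for concreteness: in the simplest case -- estimating a single
probability p from m independent samples, via Hoeffding's inequality -- the PAC
guarantee of Valiant's learning theory can be written as

  \Pr\big[\, |\hat{p} - p| \le \epsilon \,\big] \ge 1 - \delta
  \quad \text{whenever} \quad
  m \ge \frac{1}{2\epsilon^2} \ln\frac{2}{\delta}

i.e. with enough evidence one is "probably" (with chance at least 1 - delta)
"approximately" (within epsilon) correct -- never certainly, never exactly.)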
Of course, this perspective, in itself, draws explicitly on American
pragmatist philosophy ;)
-- Ben G