From: Keith Henson (firstname.lastname@example.org)
Date: Sun Aug 27 2006 - 08:00:56 MDT
At 10:09 AM 8/27/2006 +0100, Russell wrote:
snip (re Richard Loosemore's epistle)
>So what you've got is the unsurprising situation that some people agree
>with you and some don't. I think it best to proceed on that basis, and
>having explained why certain approaches won't work, make specific
>proposals for what approach you think will work.
*IF* there is general agreement that human brains are the seat of something
called intelligence, then a functional copy of one should exhibit intelligence.
Since we don't know nearly enough detail about how brains work (yet!), that's an approach for the future, but there is a high probability it will work.
I have long expressed my misgivings about such a project, both from the
hardware requirements and from the potential dangers of a mass of
psychological traits evolved in the stone age, especially if an uploaded
mind had considerable real-world power.
Analogy is always suspect, but Marvin Minsky thinks well of it. Consider
the Wrights. They knew darn well that heavier-than-air flight was possible
since birds can fly. Some people in those days tried to emulate birds with
flapping machines. The Wrights succeeded because they extracted the wing
idea from (gliding) birds and used props to supply the motive power.
We know AI is possible because natural intelligence exists. So what are
the features of natural intelligence you would need to extract, and what can
be replaced with something nature could not (or did not) evolve, like a
propeller?
This archive was generated by hypermail 2.1.5 : Fri May 17 2013 - 04:01:03 MDT