From: Chris Capel (pdf23ds@gmail.com)
Date: Mon Oct 17 2005 - 09:55:44 MDT
To be clear, these are your comments and not a quote? You want to
discuss this with the list?
On 10/16/05, Woody Long <ironanchorpress@earthlink.net> wrote:
> Some points --
>
> 1. "Humanoid intelligence requires humanoid interactions with the world" --
> MIT Cog Project website
Granted, but SL4 isn't really interested in humanoid intelligence. The
position of the SIAI and many on this list, if I may speak for them,
is that a strictly humanoid intelligence would be unlikely to be
Friendly--it would be terribly dangerous under recursive
self-modification and would likely lead to an existential catastrophe.
Friendly AI is probably not going to end up being anything close to
"humanoid".
> This means a fully "human intelligent" SAI must be robotic, and as in
I think there's a bit of equivocation here (using this list's
definitions--not your fault). A humanoid intelligence is presumably
one that exhibits all the major, pertinent characteristics of a human.
But the term "human equivalent intelligence", as used by Eliezer
Yudkowsky and most others on this list, means only that the
intelligence is capable of performing all human tasks at least as
well as a human. You probably didn't mean this, but I think this term
is the more interesting one. More below.
> An SAI requires the same characteristics as an android, and this sets the
> bar for entry into the field of SAI.
SAI here being what? Superintelligent AI? Any intelligence that
performs well above human levels in all areas? I think this is false.
It's not obvious that a human-equivalent or higher intelligence needs
any sort of robotic apparatus to attain that status. If it surpasses
human intelligence by orders of magnitude in abstract reasoning or
other areas, then it will be able to use its superiority in
general-purpose ratiocination to fully understand (and thus
experience, as closely as any foreign intelligence could) human
experience and intelligence in all other areas, even those in which
the AI is naturally weak. Because of this, it will likely be very
foreign and strange in nature, yet still capable of relating to us in
a fully human manner.
Chris Capel
-- "What is it like to be a bat? What is it like to bat a bee? What is it like to be a bee being batted? What is it like to be a batted bee?" -- The Mind's I (Hofstadter, Dennet)