From: Richard Loosemore (rpwl@lightlink.com)
Date: Mon Oct 17 2005 - 12:03:07 MDT
Chris Capel wrote:
> To be clear, these are your comments and not a quote? You want to
> discuss this with the list?
>
> On 10/16/05, Woody Long <ironanchorpress@earthlink.net> wrote:
>
>>Some points --
>>
>>1. "Humanoid intelligence requires humanoid interactions with the world" --
>>MIT Cog Project website
>
>
> Granted, but SL4 isn't really interested in humanoid intelligence. The
> position of the SIAI and many on this list, if I may speak for them,
> is that strictly humanoid intelligence would not likely be
> Friendly--it would be terribly dangerous under recursive
> self-modification, and likely lead to an existential catastrophe.
> Friendly AI is probably not going to end up being anything close to
> "humanoid".
You do not speak for the entire SL4 list, unless or until I (at least)
unsubscribe from it.
As far as I am concerned, the widespread (is it really widespread?) SL4
assumption that "strictly humanoid intelligence would not likely be
Friendly ...[etc.]" is based on a puerile understanding of, and contempt
for, the mechanics of human intelligence.
Whereof you disdain to understand, thereof you should not speak.
Richard Loosemore