From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Oct 17 2005 - 12:18:16 MDT
Richard Loosemore wrote:
>
> As far as I am concerned, the widespread (is it really widespread?) SL4
> assumption that "strictly humanoid intelligence would not likely be
> Friendly ...[etc.]" is based on a puerile understanding of, and contempt
> of, the mechanics of human intelligence.
Untrue. I spent my first six years, from 1996 to 2002, studying the
mechanics of human intelligence, until I understood them well enough to
see why that approach wouldn't work. I suppose that in your lexicon,
"Complex Systems Theory" and "mechanics of human intelligence" are
synonyms. In my vocabulary, they are not: studying such mere matters as
neuroscience and cognitive psychology counts as trying to understand the
mechanics of human intelligence, whatever my regard for "Complex Systems
Theory" as a source of useful, predictive, engineering-helpful
hypotheses about it. Disdain for your private theory of human
intelligence is not the same as disdain for understanding the mechanics
of human intelligence.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence