From: Keith Henson (hkhenson@rogers.com)
Date: Tue Jun 13 2006 - 16:12:02 MDT
At 11:34 PM 6/12/2006 -0700, Eliezer wrote:
>Robin Hanson wrote:
snip
>>You warn repeatedly about how easy it is to fool oneself into thinking 
>>one understands AI, and you want readers to apply this to their 
>>intuitions about the goals an AI may have.
>
>The danger is anthropomorphic thinking, in general.  The case of goals is 
>an extreme case where we have specific, hardwired, wrong intuitions.  But 
>more generally, all your experience is in a human world, and it distorts 
>your thinking.  Perception is the perception of differences. When 
>something doesn't vary in our experience, we stop even perceiving it; it 
>becomes as invisible as the oxygen in the air.  The most insidious biases, 
>as we both know, are the ones that people don't see.
I agree.
Perhaps understandability is an argument for imbuing AIs with *some* human 
motivations, just so we have a chance of understanding them.
Humans have a few really awful psychological traits, but it might be possible 
to avoid activating the ones we know about.
Keith Henson