RE: Human motivations was Two draft papers:

From: Keith Henson (hkhenson@rogers.com)
Date: Wed Jun 14 2006 - 23:14:35 MDT


At 08:09 PM 6/14/2006 +0000, "H C" <lphege@hotmail.com> wrote:
>>From: Keith Henson <hkhenson@rogers.com>
>>Reply-To: sl4@sl4.org
>>To: sl4@sl4.org
>>Subject: Human motivations was Two draft papers:
>>Date: Tue, 13 Jun 2006 18:12:02 -0400
>>
>>At 11:34 PM 6/12/2006 -0700, Eliezer wrote:
>>>Robin Hanson wrote:
>>
>>snip
>>
>>>>You warn repeatedly about how easy it is to fool oneself into thinking
>>>>one understands AI, and you want readers to apply this to their
>>>>intuitions about the goals an AI may have.
>>>
>>>The danger is anthropomorphic thinking, in general. The case of goals
>>>is an extreme case where we have specific, hardwired, wrong
>>>intuitions. But more generally, all your experience is in a human
>>>world, and it distorts your thinking. Perception is the perception of
>>>differences. When something doesn't vary in our experience, we stop even
>>>perceiving it; it becomes as invisible as the oxygen in the air. The
>>>most insidious biases, as we both know, are the ones that people don't see.
>>
>>I agree.
>>
>>Perhaps understandability is an argument for imbuing AIs with *some*
>>human motivations, just so we have a chance of understanding them.
>>
>>Humans have a few really awful psychological traits, but activating the
>>ones we know about might be avoidable.
>>
>>Keith Henson
>
>An argument?

"Reason" would perhaps be a better word choice.

>Maybe it's an interesting thing to consider in relation to Friendliness,
>but it is hardly of the technical calibre required to present any kind of
>argument.

I would say that human motivations (especially in light of evolutionary
psychology) are much better understood than the motivations of hypothetical
AIs. But I would certainly listen to counterarguments.

Keith Henson


