Re: Military Friendly AI

From: James Higgins (jameshiggins@earthlink.net)
Date: Thu Jun 27 2002 - 17:12:14 MDT


At 06:11 PM 6/27/2002 -0400, Eliezer S. Yudkowsky wrote:
>Smigrodzki, Rafal wrote:
>>The AI will have a very complex network of hierarchical goals, as well as
>>Bayesian priors, being adjusted to function in a complex environment with
>>uncertain information. Don't you think that there might be a pervasive
>>change, an accumulation of small biases towards violent solutions?
>
>Sure, when the AI is young. When the AI grows up I would expect it to
>rerun the programmers' perceived considerations involved in their moral
>decisions, come to the conclusion that violent solutions were not as
>desirable as it was told (assuming the "honorable soldiers" are not
>actually correct!), model its own growth for biases that could have been
>introduced, and wash out the biases. Of course a Friendly AI has to be
>able to do this once it grows up! It's not just a military question!

But why must it do this? I'm assuming that a military organization that
created such an AI would consider this counterproductive and, as such,
would attempt not to implement this portion of the system. I'm not
claiming this is wise, practical, possible, or any such thing, only that
it's a likely action on their part. That is the root reason I think such
organizations and Friendly AI are mutually exclusive.

>>As you later say, the growing up of the infant AI might be unfavorably
>>affected, in the distant analogy to the detrimental effects of early
>>childhood emotional trauma in humans.
>
>This is an appealing analogy which happens to be, as best I can tell,
>completely wrong.

Which part? Are you saying that humans don't experience detrimental
effects later in life from early childhood emotional trauma, or that this
is not applicable to AI? If the latter, please explain why.

James Higgins


