From: Eliezer S. Yudkowsky (email@example.com)
Date: Thu Jun 27 2002 - 16:11:54 MDT
Smigrodzki, Rafal wrote:
> The AI will have a very complex network of hierarchical goals, as
> well as Bayesian priors, being adjusted to function in a complex
> environment with uncertain information. Don't you think that there might
> be a pervasive change, an accumulation of small biases towards violent
Sure, when the AI is young. When the AI grows up, I would expect it to rerun
the programmers' perceived considerations involved in their moral decisions,
come to the conclusion that violent solutions were not as desirable as it
was told (assuming the "honorable soldiers" are not actually correct!),
model its own growth for biases that could have been introduced, and wash
out the biases. Of course a Friendly AI has to be able to do this once it
grows up! It's not just a military question!
> Whenever the AI has to make a decision using uncertain data,
> the verification of the validity of the decision might take a long time,
> and might be affected by the current state of the AI. Once in a state
> predisposing to violent behaviors, self-reinforcing patterns could
> emerge, with violence leading to violent responses, and the (perceived)
> need for more violent responses, especially if there are multiple AIs
> involved. Friendliness is difficult if there is no possibility of
> verifying the motives and actions of game participants, and this would
> be the case with warring AIs. Their motivations would be opaque to each
> other, and therefore subject to the same type of social dynamics that
> occurs in humans incapable of monitoring each other's behavior (first
> strikes, arms races, etc.)
That's an argument against turning control of combat over to AIs, which is a
very different thing from an argument against exposing Friendly AIs to
combat. As I said before, if you want to argue against military AI there
are plenty of other ways... it's just that I'm afraid I can't back you if
you argue from Friendly AI theory.
> The FAI might learn the meaning of being Friendly to
> humans, but I think it would have difficulty with learning to be friendly
> to the combination of tribal humans with their own tribal-Friendly AIs.
No, I can't see this as any more difficult.
> As you later say, the growing up of the infant AI might be unfavorably
> affected, in the distant analogy to the detrimental effects of early
> childhood emotional trauma in humans.
This is an appealing analogy which happens to be, as best I can tell, mistaken.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT