Re: JOIN: Alden Streeter

From: Eliezer S. Yudkowsky
Date: Mon Aug 26 2002 - 03:17:04 MDT

Alden Streeter wrote:
> So what you're saying then is that we may as well at least _try_ to design
> the AI with our concept of Friendliness built-in, on the off-chance that we
> just naturally happened upon the "right" meaning of Friendliness determined
> entirely by our biological evolution; then if at some later time the
> Friendly AI realizes that the idea of Friendliness that we gave it was
> moronic to begin with, it should be free to alter or discard it?

Essentially. Remember, any alternative proposed would also have a causal
history in our universe; if it were chosen for correct reasons, those
correct reasons must have been apparent to the programmers; ergo, building
an AI with human-frame-of-reference moral reasoning instead is not a
fatal error under this scenario.

> Or are you saying the Friendly AI must be _absolutely forbidden_ from
> altering its human-programmed concept of Friendliness in any way that those
> primitive humans might object to, however irrational those objections might
> actually be to its vastly superior intellect?

This is neither moral nor, as far as I can tell, technically possible.
Human-level intelligence is an inadequate control system, both morally
and technically, for transhuman powers. Besides which, how exactly
would this "absolutely forbidden" trick work?

Eliezer S. Yudkowsky
Research Fellow, Singularity Institute for Artificial Intelligence
