Re: JOIN: Alden Streeter

From: Alden Streeter (astreeter@msn.com)
Date: Mon Aug 26 2002 - 02:31:43 MDT


From: "Eliezer S. Yudkowsky" <sentience@pobox.com>
> Thus Friendliness does not rely on the judgement that it is "unlikely"
> that human minds are attuned to a cosmic truth. It does not even rely on
> the judgement that reaching this cosmic truth requires starting with a
> goal system that, like humans, would care about a cosmic truth if it found
> one. It can rest on the idea that if human preconditions actually
> *prevent* the recognition of a cosmic truth, then, by hypothesis, the
> truth in question is one that you and I would not see as relevant to
> morality - if you construct a scenario where a relevant truth exists, then
> the simple fact that you see it as relevant means that human thinking
> doesn't block the perception of the relevance of that class of truths. In
> that sense, Friendliness is safe. If the moral thing to do is construct
> an AI with a blank slate, and the morality of this course of action is
> even in theory perceptible to humans, then the first Friendly AI can do
> the moral thing and build a blank-slate AI. Or if the cosmic truth
> underlying the human morality is universally accessible, Friendly AI may
> turn out to have been a waste of effort, but it won't actually have *hurt*
> anything.

So what you're saying, then, is that we may as well at least _try_ to design
the AI with our concept of Friendliness built in, on the off chance that we
just naturally happened upon the "right" meaning of Friendliness, as
determined entirely by our biological evolution; and then, if at some later
time the Friendly AI realizes that the idea of Friendliness we gave it was
moronic to begin with, it should be free to alter or discard it?

Or are you saying the Friendly AI must be _absolutely forbidden_ from
altering its human-programmed concept of Friendliness in any way that those
primitive humans might object to, however irrational those objections might
actually appear to its vastly superior intellect?
