From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Fri Jun 04 2004 - 14:28:06 MDT
Ben Goertzel wrote:
>
> Eliezer,
>
> If you trust human judgment so little -- even your own -- then how can
> you trust yourself and the other SIAI insiders to make correct judgments
> about when the collective-volition-embodying uber-AI is operating
> correctly and when it isn't?
I don't. That's why on the technical side I'm cackling fearsomely and
plotting such awesome safeguards as the human mind can scarcely conceive,
and scheming to extort an exact description of verification requirements
from FAI theory. The key is to make it a technical issue on which success
is theoretically possible, rather than a guaranteed moral failure. Or, if
you like, to move as far as possible in the direction of transforming such
things as "guarding against Singularity regret" into technical issues. For
this is a challenge that lies within the art of one who would make minds,
or mindlike processes. That's the moral side.
On the technical side, I think it is possible to use the Friendly Thingy to
augment your ability to keep the Friendly Thingy on track. But only if you
know exactly how to describe what you are doing and exactly why you are
doing it, so no one should hope that this avoids the need for humans to
solve the basic theoretical problems. That is a bootstrapping
argument that only sounds plausible if you don't know enough of the rules
to know that knowing the rules is required. It looks to me like if one can
solve the theory, it then becomes possible to use the theory to make it
realistically, humanly possible to solve the problem in practice. But it
does *not* look possible to ask an Artificial Intelligence to help you
solve the theoretical FAI problem if you built that Artificial Intelligence
using only guesses at the theoretical FAI solution, unless by unbelievable
luck you got the guesses exactly right without knowing any of the
theoretical criteria of rightness. I find it hard to visualize an expected
utility maximizer that will tell you the problems with maximization, unless
it was designed in exactly the right way, which would already imply that you
know the problems with maximization.
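
To make the shape of that last point concrete, here is a minimal sketch (in Python, purely illustrative; the function names, the toy outcomes, and the misspecified utility are assumptions added for illustration, not anyone's actual design) of a bare expected utility maximizer. Notice that nothing in it can inspect or criticize its own utility function:

    # A minimal sketch, assuming a discrete action/outcome toy world.
    # Names (expected_utility, choose, transition_probs) are illustrative only.

    def expected_utility(action, transition_probs, utility):
        """Sum over outcomes of P(outcome | action) * U(outcome)."""
        return sum(p * utility(outcome)
                   for outcome, p in transition_probs(action).items())

    def choose(actions, transition_probs, utility):
        """Pick the action with the highest expected utility.

        Note what is missing: nothing in this loop questions `utility`
        itself. If U is subtly wrong, the maximizer optimizes the wrong
        thing at full strength; it has no channel for reporting "the
        problem is with maximization under this U" unless such a channel
        was designed in from the start, which presupposes already knowing
        what to check for.
        """
        return max(actions,
                   key=lambda a: expected_utility(a, transition_probs, utility))

    if __name__ == "__main__":
        # Toy usage with a hypothetical misspecified U that rewards "paperclips".
        probs = {"act_safe": {"ok": 1.0},
                 "act_risky": {"paperclips": 0.9, "ok": 0.1}}
        utility = lambda o: 10.0 if o == "paperclips" else 1.0
        best = choose(probs.keys(), lambda a: probs[a], utility)
        print(best)  # -> "act_risky": it pursues the flawed U without ever flagging it.

The design choice being illustrated: the criterion of optimization sits outside the loop that does the optimizing, so any check on the criterion has to be built in deliberately, by someone who already knows what could go wrong with it.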
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence