Re: singularitarian premises

From: Michael Roy Ames
Date: Wed Jan 26 2005 - 09:06:40 MST

Daniel Radetsky:

You wrote:

1. In developing FAI, some form of Last Judge/AI Jail will be used.

2. ...[The] judge will have sole authority to determine whether an AI is
sufficiently friendly/correctly programmed/whatever he's supposed to [judge].

3. ...[O]nce the AI begins to self-improve, we have to evaluate [its degree]
of friendliness on the basis of its actions.

The Last Judge concept is in the plan AFAIK, but only grudgingly, because it
is such a kludge. The Last Judge would determine the desirability of an
outcome, not friendliness. If someone has a better idea of how to evaluate
an outcome without influencing it (Daniel?) then I'd be glad to hear it.

The AI Jail concept will probably be used in the sense of "running on
computers that are not connected to the network" but not in the sense of "we
will see how it behaves before we let it out". The first sense is useful in
that it provides a minimum level of protection against code 'escaping' or
being stolen. The second sense is not useful because it suggests that AI
evaluation by humans can be done the same way as human evaluation by humans,
which is false for the type of AI we are attempting to build.

AI evaluation will use many methods of observation, all of which boil down
to observing actions. Determining a 'degree' of friendliness will involve a
complex technical procedure of comparison against ideal data & models -
something that could only be done by a team of experts with a thorough
understanding of those models. My guess is that it will take a *lot* of time
and effort.

Michael Roy Ames

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT