Re: Summary of current FAI thought

From: Eliezer Yudkowsky
Date: Sat Jun 05 2004 - 02:46:53 MDT

Samantha Atkins wrote:
> Presumably it only has current knowledge of the subjects involved and
> this without full sentient being referents. The knowledge to date and
> any likely extrapolation of such knowledge in, say, the next decade is
> likely not to be up to the task. Extrapolations from the FRPOP without
> it understanding anything about qualia or knowing sentience itself are
> likely to be extremely dangerous to human well-being.

I did not say it would not understand "qualia". The FRPOP has to
understand "qualia" well enough to create zombies, rather than real people,
when it simulates humans... or something. As I said, I confess I'm not entirely
certain how this should work, but I still don't think that the characters
in "One Over Zero" are genuine people, even though they ponder their own
existences. It's just Tailsteak imagining them pondering their own existences.

We're not talking about a limitation on power but a difference in architecture.

>> That's phrased too negatively; the volition could render first aid,
>> not only protect - do whatever the coherent wishes said was worth
>> doing, bearing in mind that the coherent wish may be to limit the work
>> done by the collective volition.
> But wouldn't the FRPOP only honor a coherent wish that allowed it to
> fulfill its primary directive of protecting/serving humans?

The "primary directive" is the collective volition, not protecting/serving
humans or anything else the programmers would need to get morally right on
exactly the first try.

> If the
> collective coherent wish was for the FRPOP to bug out, would it do so?

Yes. That's the *whole point*.

Eliezer S. Yudkowsky                
Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT