Re: ethics

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Fri May 21 2004 - 14:48:40 MDT


Samantha Atkins wrote:
>
> On May 19, 2004, at 3:56 PM, Eliezer S. Yudkowsky wrote:
>
>> Similarly, FAI doesn't require that I understand an existing
>> biological system, or that I understand an arbitrarily selected
>> nonhuman system, but that I build a system with the property of
>> understandability. Or to be more precise, that I build an
>> understandable system with the property of predictable
>> niceness/Friendliness, for a well-specified abstract predicate
>> thereof. Just *any* system that's understandable wouldn't be enough.
>
> You propose to give this system, constrained to be understandable by
> yourself, the power to control the immediate space-time area in service
> of its understandable goals? That is a lot of power to hand something
> that is not really a mind or particularly self-aware or reflective.

Completely reflective, and not self-aware in the sense of that which we
refer to as "conscious experience". (Remember, this may look like a
mysterious question, but there is no such thing as a mysterious answer.)

> If I understand you correctly, I am not at all sure I can support such
> a project. It smacks of a glorified all-powerful mindless coercion
> for "our own good".

Yes, I understand the danger here. But Samantha, I'm not sure I'm ready
to be a father. I think I know how to redirect futures, and how to
deploy huge amounts of what I would consider to be intelligence and
what, for the sake of avoiding conversational ambiguity, I would
cautiously call "optimization pressures". But I'm still fathoming the
reasons why
humans think they have conscious experiences, and the foundations of
fun, and the answers to the moral questions implicit in myself. I feel
myself lacking in the knowledge, and the surety of knowledge, needed to
create a new sentient species. And I wistfully wish that all humankind
should have a voice in such a decision, the creation of humanity's first
child. And I wonder if it is a thing we would regard as a loss of
destiny, to be rescued from our present crisis by a true sentient mind
vastly superior to ourselves in both intelligence and morality, rather
than by a powerful optimization process bound to the collective volition of
humankind. There's a difference between being rescued by a
manifestation of the superposed extrapolation of the decisions humankind
would prefer given sufficient intelligence, and being rescued by an
actual parent.

If I can, Samantha, I would resolve this present crisis without creating
a child, and leave that to the future. I fear making a mistake that
would be terrible even if remediable, and I fear exercising too much
personal control over humankind's destiny. Perhaps it is not possible,
even in principle, to build a nonsentient process that can extrapolate
the volitions of sentient beings without ever actually simulating
sentient beings to such a degree that we would see helpless minds
trapped inside a computer. The problem is more difficult when one
considers that constraint. One cannot brute-force the problem with a training set
and a hypothesis search, for one must understand enough about sentience
to rule out "hypotheses" that are actual sentient beings. The added
constraint forces me to understand the problem on a deeper level, and
work out the exact nature of things that are difficult to understand.
That is a good thing, broadly speaking. I find that much of life as a
Friendly AI programmer consists in forcing your mind to get to grips
with difficult problems, instead of finding excuses not to confront them.
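
To make the shape of that constraint concrete, here is a minimal sketch
in Python (every name in it is hypothetical and chosen only for
illustration, not drawn from any actual FAI design): a generate-and-test
hypothesis search in which a screening predicate must reject any
candidate that might itself be a sentient being, before that candidate
is ever evaluated. The sketch captures only the structure of the
argument; all of the real difficulty lives inside the predicate, which
is left as a stub someone would have to actually know how to write.

    # Minimal sketch, hypothetical names throughout. The point is structural:
    # the screening predicate must run *before* a candidate hypothesis is ever
    # instantiated or simulated, and writing that predicate correctly is
    # exactly the understanding that cannot be skipped.
    from typing import Callable, Dict, List, Tuple

    Candidate = Dict[str, float]  # stand-in for a candidate hypothesis/model

    def hypothesis_search(
        generate: Callable[[], Candidate],             # proposes a candidate (hypothetical)
        may_be_sentient: Callable[[Candidate], bool],  # the hard part, stubbed by the caller
        score: Callable[[Candidate], float],           # fit against a training set (hypothetical)
        n_candidates: int = 1000,
        keep: int = 10,
    ) -> List[Candidate]:
        accepted: List[Tuple[float, Candidate]] = []
        for _ in range(n_candidates):
            candidate = generate()
            if may_be_sentient(candidate):
                # Rejected before evaluation: scoring it could itself amount to
                # simulating a mind, which is what the constraint forbids.
                continue
            accepted.append((score(candidate), candidate))
        # Sort on the score alone so tied scores never compare the dicts.
        accepted.sort(key=lambda pair: pair[0], reverse=True)
        return [candidate for _, candidate in accepted[:keep]]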

I am going to at least try to pursue the difficult questions and do this
in the way that I see as the best possible, and if I find it is too
difficult *then* I will go back to my original plan of becoming a
father. But I have learned to fear any clever strategy for cheating a
spectacularly difficult question. Do not tell me that my standards are
too high, or that the clock is ticking; past experience says that
cheating is extremely dangerous, and I should try HARD before giving up.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

