Re: ethics

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed May 19 2004 - 14:52:38 MDT


Christopher Healey wrote:
> John,
>
> My quote was truncated. It should read:
>
>> any black-box emergent-complexity solution is to be avoided >>>
>> almost without exception <<<
>
> The primary point I was supporting is that if you CAN choose, ALWAYS
> choose the more predictable path UNLESS the potential risk of NOT doing
> so is greater. Under a known race-to-singularity situation, it may be
> the more rational choice to trade off a relative amount of
> predictability for first-to-take-off status. This modifier to the
> rule, while valid, seems more likely to be used as an "end justifies
> means" rationalization by those who would act irresponsibly, so I'd be
> surprised if the SIAI focuses on that part of it in their pop campaign.

I would presently support the flat general rule that things which look
like minor problems, but which you don't quite understand, are blocker
problems until fathomed completely - mostly because of the number of
things I have encountered that looked like minor problems, that I didn't
quite understand, and that, as it turned out after I learned the rules, I
desperately needed to understand.

I don't think there will be a good reason for using probabilistic
self-modification techniques, ever. Deductive self-modification should be
quite sufficient. There's a difference between hope and creating a system
that can be rationally predicted to work, and the difference is that hope
doesn't help.
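
A deliberately toy sketch of the distinction, in Python - the names and
the stand-in "proof checker" below are invented for illustration and
resemble nothing in a real self-modifying system. The point is only that
a modification can pass every sampled trial and still violate its
specification, whereas a deductive rule refuses anything it cannot
actually verify:

    import random

    # Candidate replacement for the system's own absolute-value routine.
    # It is wrong on exactly one input (0 maps to -1), which sampling
    # real-valued inputs at random will essentially never hit.
    def candidate_abs(x):
        return -1 if x == 0 else (x if x > 0 else -x)

    def probabilistic_accept(fn, trials=10_000):
        # "Hope": keep the modification because no sampled trial failed.
        return all(fn(random.uniform(-1e6, 1e6)) >= 0 for _ in range(trials))

    def deductive_accept(fn, proof_checker):
        # "Rational prediction": keep it only if an explicit, checkable
        # argument that it meets the spec goes through. The checker here
        # is a stand-in that refuses whatever it cannot prove.
        return proof_checker(fn)

    print(probabilistic_accept(candidate_abs))  # almost surely True: accepted, yet wrong
    print(deductive_accept(candidate_abs, lambda f: False))  # no proof, so rejected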

The part about the "rational tradeoff" ignores the fact that until you
understand something, you have no idea how much you need to understand it;
you are simply guessing. Every time I have seen someone try to get away
with this guess - my past self included - they have been lethally wrong.
To build an FAI you must aspire to a higher level of understanding
than poking around in design space until you find something that appears
to work.

I do not expect anyone who *actually* understands FAI to *ever* use the
argument of "We don't understand this, but we'll use it anyway because of
<nitwit utilitarian argument>."  The nitwit argument only seems to apply because
the speaker is too ignorant to realize that they have *no* chance of
success, that the *only* reason they think they can build an FAI without
understanding is that they lack the understanding to know this is impossible.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

