RE: ethics

From: Chris Healey (chealey@unicom-inc.com)
Date: Wed May 19 2004 - 16:55:31 MDT


Please bear with me if I am being a bit thickheaded.

If I understand your point, it's that under such a race condition, the
success differential between having a non-functional take-off
capability and having a functional take-off protocol patched with
poorly understood probabilistic modules (as minimally necessary to win
the race) is likely to be vanishingly small, if not zero? In other
words, you don't understand the actual problem you are attempting to
solve, so your orthogonal efforts are wasted. You will fail.

Hmm, I'm not sure I can disagree there, but I'm also not entirely sure
why I hesitate to agree. I sense my reluctance has something to do
with preserving the illusion of control over one's fate. That's a big
red flag that I'll have to give more attention to tracking.

It's probably a moot consideration anyhow, since departing from such a
core policy of avoidance would most likely involve enough data to
formulate a more specific response. That is, if such data were
available at all; and if it were not available, it would not really be
part of the decision.

Am I even close?

Christopher Healey

> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf
> Of Eliezer S. Yudkowsky
> Sent: Wednesday, May 19, 2004 4:53 PM
> To: sl4@sl4.org
> Subject: Re: ethics
>
>
> Christopher Healey wrote:
> > John,
> >
> > My quote was truncated. It should read:
> >
> >> any black-box emergent-complexity solution is to be avoided >>>
> >> almost without exception <<<
> >
> > The primary point I was supporting is that if you CAN choose,
> > ALWAYS choose the more predictable path UNLESS the potential risk
> > of NOT doing so is greater. Under a known race-to-singularity
> > situation, it may be the more rational choice to trade off a
> > relative amount of predictability for first-to-take-off status.
> > This modifier to the rule, while valid, seems more likely to be
> > used as an "end justifies means" rationalization by those who
> > would act irresponsibly, so I'd be surprised if the SIAI focuses
> > on that part of it in their pop campaign.
>
> I would presently support the flat general rule that things which
> look like minor problems, but which you don't quite understand, are
> blocker problems until fathomed completely. Mostly because of the
> number of things I have encountered which looked like minor
> problems, and which I didn't quite understand, and which - as it
> turned out, after I learned the rules - I desperately needed to
> understand.
>
> I don't think there will be a good reason for using probabilistic
> self-modification techniques, ever. Deductive self-modification
> should be quite sufficient. There's a difference between hope and
> creating a system that can be rationally predicted to work, and the
> difference is that hope doesn't help.
>
> The part about the "rational tradeoff" ignores the fact that until
> you understand something, you have no idea how much you need to
> understand it; you are simply guessing. Every time I see someone try
> to get away with this guess, including my memories of my past self,
> they are lethally wrong. To build an FAI you must aspire to a higher
> level of understanding than poking around in design space until you
> find something that appears to work.
>
> I do not expect anyone who *actually* understands FAI to *ever* use
> the argument of "We don't understand this, but we'll use it anyway
> because of <nitwit utilitarian argument>." The nitwit argument only
> applies because the speaker is too ignorant to realize that they
> have *no* chance of success, that the *only* reason they think they
> can build an FAI without understanding is that they lack the
> understanding to know this is impossible.
>
> --
> Eliezer S. Yudkowsky http://intelligence.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
>


