From: Samantha Atkins (samantha@objectent.com)
Date: Wed May 19 2004 - 23:42:22 MDT
On May 19, 2004, at 1:52 PM, Eliezer S. Yudkowsky wrote:
> Christopher Healey wrote:
>> John,
>> My quote was truncated. It should read:
>>> any black-box emergent-complexity solution is to be avoided >>>
>>> almost without exception <<<
>> The primary point I was supporting is that if you CAN choose, ALWAYS
>> choose the more predictable path UNLESS the potential risk of NOT
>> doing
>> so is greater. Under a known race-to-singularity situation, it may be
>> the more rational choice to trade off a relative amount of
>> predictability for first-to-take-off status. This modifier to the
>> rule, while valid, seems more likely to be used as an "end justifies
>> means" rationalization by those who would act irresponsibly, so I'd be
>> surprised if the SIAI focuses on that part of it in their pop campaign.
>
> I would presently support the flat general rule that things which look
> like minor problems, but which you don't quite understand, are blocker
> problems until fathomed completely. Mostly because of the number of
> things I have encountered which looked like minor problems, and which
> I didn't quite understand, and which - as it turned out, after I
> learned the rules - I desperately needed to understand.
>
If you really follow this approach, I do not believe you will ever
build a working SAI. The problems involved, and the complexities of
possible solutions as they unfold over time, are beyond what any human
brain, even yours, is capable of fathoming completely. I am quite
certain you know this.
> I don't think there will be a good reason for using probabilistic
> self-modification techniques, ever. Deductive self-modification
> should be quite sufficient. There's a difference between hope and
> creating a system that can be rationally predicted to work, and the
> difference is that hope doesn't help.
>
> The part about the "rational tradeoff" ignores the fact that until you
> understand something, you have no idea how much you need to understand
> it; you are simply guessing. Every time I see someone try to get away
> with this guess, including my memories of my past self, they are
> lethally wrong. To build an FAI you must aspire to a higher level of
> understanding than poking around in design space until you find
> something that appears to work.
>
Given the capacity of the human mind, there is no way to avoid making
educated guesses beyond a certain point, and that point comes sooner
than any of us would like. We must aspire to the highest level of
understanding we can achieve. But there is no sense in lying to
ourselves that we can always take the time to understand fully, or
that any conceivable amount of time will bring full understanding of
all the problems, given our mental limitations.
> I do not expect anyone who *actually* understands FAI to *ever* use
> the argument of "We don't understand this, but we'll use it anyway
> because of <nitwit utilitarian argument>." The nitwit argument only
> applies because the speaker is too ignorant to realize that they have
> *no* chance of success, that the *only* reason they think they can
> build an FAI without understanding is that they lack the understanding
> to know this is impossible.
>
You are painting yourself into a corner.
- samantha