From: Samantha Atkins (samantha@objectent.com)
Date: Thu Jun 17 2004 - 00:03:40 MDT
On Jun 16, 2004, at 10:51 AM, Eliezer Yudkowsky wrote:
> Brent Thomas wrote:
>> Again I'd like to express the hope that any F(AI) developers would
>> build into their systems (as a fundamental invariant?) the 'right of
>> withdrawal'. This should not be part of a 'bill of rights' as it is
>> so fundamental to having an acceptable process that it should be a
>> basic condition.
>
>
So, if you fuck it up and the EV is grossly inaccurate, then humanity
is basically eternally screwed. And this is supposed to be safer than
creating a fully conscious SAI how? The Judge of Last Resort must
somehow be the future persons who must live under this thing.
Otherwise it is the biggest one-time gamble in history.
>> No matter what the collective thinks is best, even if it has
>> (correctly!) extrapolated my wishes or the wishes of the collective,
>> it should still not apply that solution to my (or any sentient's)
>> physical being without my express approval.
>
> Including human infants, I assume. I'll expect you to deliver the
> exact, eternal, unalterable specification of what constitutes a
> "sentient" by Thursday. Whatever happened to keeping things simple?
This is not required. Being able to shut down the optimization if it
gets wildly out of hand is required. If this can't be done, why on
earth would anyone trust you, or a hundred persons equally bright and
with excellent intentions, not to fuck it up drastically?
>
>> Change the environment, alter the systems, create the transcendent
>> utopia but do it with 'choice' and as such do not modify my
>> personality or physical being (and as part of that be prepared to
>> create 'enclaves' for those who wish to remain unmodified) without
>> the express consent of the sentient to be modified.
>
> Could you please elaborate further on all the independent details you
> would like to code into eternal, unalterable invariants? If you add
> enough of them we can drive the probability of them all working as
> expected down to effectively zero. Three should be sufficient, but
> redundancy is always a good thing.
Not at all. Make sure everyone is backed up at all times and give them
free choice, but with nudges/reminders/more grown-up advice at whatever
level each one is willing/desirous of taking. Set things up so they
can't do themselves in ultimately (although it may look like they can
to them). Keep it going long enough for each to grow up.
>
>> Do this and I think the vision of the coming singularity will be more
>> palatable for all humanity.
>
> It's not about public relations, it's about living with the actual
> result for the next ten billion years if that wonderful PR invariant
> turns out to be a bad idea.
Instead we live with the original implementation of EV extraction and
decision-making for the next ten billion years without any possibility
of a reset or an out? Hmm.
>
>> (and besides I can't really object about modifications if I was
>> consulted now can I?)
>
> Not under your system, no. I would like to allow your grownup self
> and/or your volition to object effectively.
But that being exists only within the extrapolation, which may in fact
be erroneously formulated.
>
>> Do not tell me that 'oops we got it wrong...' as indicated here:
>>>> The reason may be, "That idiot Eliezer screwed up the extrapolation
>>>> dynamic." If so, you got me, there's no defense against that.
>>>> I'll try not to do it.
>> Instead (using the principle of no modification to sentients without
>> express permission) the system can tell me "Hey, you'd be much
>> happier if you had green hair; we've done some calculations and if
>> at least 20% of the population had green hair then there would be a
>> 15% reduction in the general unhappiness quotient... Can I make this
>> modification to you or would you like a deeper explanation of the
>> intents and consequences?"
>
> I suppose that if that is the sort of solution you would come up with
> after thinking about it for a few years, it might be the secondary
> dynamic. For myself I would argue against that, because it sounds
> like individuals have been handed genie bottles with warning labels,
> and I don't think that's a good thing.
>
A genie bottle with warning labels AND the ability to recover from
errors wouldn't be so bad.
>
> The title of this subject line is "fundamentals". There is a
> fundamental tradeoff that works like this: The more *assured* are
> such details of the outcome, even in the face of our later
> reconsideration, the more control is irrevocably exerted over the
> details of the outcome by a human-level intelligence. This holds
> especially true of the things that we are most nervous about. The
> more control you take away from smarter minds, for the sake of your
> own nervousness, the more you risk damning yourself. What if the
> Right of Withdrawal that you code (irrevocably and forever, or else
> why bother) is the wrong Right, weaker and less effective than the
> Right of Withdrawal the initial dynamic would have set in place if you
> hadn't meddled?
>
Not unless you design the system so that eternal damnation is possible!
Without the ability to opt out, or try something different, or recover
from any and all errors, eternal damnation will always be possible.
- samantha