From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Tue Jun 15 2004 - 14:39:38 MDT
David K Duke wrote:
> Eliezer wrote:
>
>> This is *not* what I want. See my answer to Samantha about the hard
>> part of the problem as I see it. I want a transparent optimization
>> process to return a decision that you can think of as satisficing the
>> superposition of probable future humanities, but that actually
>> satisfices the superposition of extrapolated upgraded
>> superposed-spread present-day humankind.
>
> So will the individual have a conscious, present choice in whether to
> take part in this "optimization"?
>
> Let's say SingInst becomes hugely popular, and a reporter asks you about
> the AI: "So what's this bibbed computer do anyway?"
>
> You: "Well, it's going to [insert esoteric choice of words here
> describing volition]"
>
> Reporter: "Doesn't that mean it's gonna do stuff without our present
> conscious approval?"
>
> You: "Yesssssssssss!"
>
> Reporter: "Run for your lives and insurance company!"
>
> ******
>
> Basically what I'm asking is, will it do this without my current
> conscious will? Yes or no?
In the initial dynamic? Yesssssssssss! Because there's no way in hell
that your current conscious will is going to understand the successor
dynamic, much less construct it. No offense.
Will the successor dynamic do anything without your current conscious will?
My guess is, yes, that is what is wise/benevolent/good-of-humanity etc.,
which is to say, if you thought about it for a few years you would decide
that that was the right decision. Philip Sutton and some others seem to
think not only that this is unwise, but that it would be unwise for the
successor dynamic to do anything other than present helpful advice.
If you write an initial dynamic based on individual volition, then as
discussed extensively in the document, you are *pre-emptively*, on the
basis of your personal moral authority and your current human intelligence,
writing off a hell of a lot of difficult tradeoffs without even considering
them. Such as whether anyone fanatically convinced of a false religion
ever learns the truth, whether it is theoretically possible for you to even
try to take heroin away from drug addicts, and whether infants grow up into
humans or superinfants. I do not say that I know the forever answer to
these moral dilemmas. I say I am not willing to make the decision on my
own authority. No, not even for the sake of public relations. The public
relations thingy is a lost cause anyway. Also, you must define the
"is-individual?" predicate in "individual volition" exactly right on your
first try, including the whole can of worms of sentience and citizenship,
because if the initial dynamic can redefine "is-individual?" using a
majority vote, you aren't really protecting anyone.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence