From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Tue Jun 15 2004 - 17:06:13 MDT
David K Duke wrote:
>> In the initial dynamic? Yesssssssssss! Because there's no way in
>> hell that your current conscious will is going to understand the
>> successor dynamic, much less construct it. No offense.
>
> Okay, I understand. But at the same time, I just don't want to transcend
> yet. And I don't think you have the right to tell me (or program
> something) to make me do that. I suppose that doesn't matter to you
> though, if you've thought this far, does it?
Actually, I also would currently say that I want to grow up slowly. I just
think that the decision as to whether that decision should be made by our
current volitions should itself be made by our extrapolated volitions.
You seem to be visualizing something considerably "off" from what I'm
suggesting - i.e., forced transcendence as an immediate direct consequence
of collective volition, some independent all-powerful being rather than a
transparent optimization process. Have you read the Collective Volition
document?
>> Will the successor dynamic do anything without your current conscious
>> will? My guess is, yes, that is what is
>> wise/benevolent/good-of-humanity etc., which is to say, if you thought
>> about it for a few years you would decide that that was the right
>> decision. Philip Sutton and some others seem to think that this is
>> not only unwise, but that it would be unwise for the successor dynamic
>> to do anything other than present helpful advice.
>
> Once there is a (virtually) all-powerful, benevolent being, what's the
> hurry to force us to do anything? Do you know about a not-so-friendly AI
> traveling at light speed towards us or something?
Who said anything about hurrying to force anyone to do anything? In
"Collective Volition" I explicitly talked about going slow, albeit that,
too, was only a guess.
>> If you write an initial dynamic based on individual volition, then as
>> discussed extensively in the document, you are *pre-emptively*, on
>> the basis of your personal moral authority and your current human
>> intelligence, writing off a hell of a lot of difficult tradeoffs
>> without even considering them. Such as whether anyone fanatically
>> convinced of a false religion ever learns the truth, whether it is
>> theoretically possible for you to even try to take heroin away from
>> drug addicts, and whether infants grow up into humans or superinfants.
>> I do not say that I know the forever answer to these moral dilemmas.
>> I say I am not willing to make the decision on my own authority.
>
> When you undertake the creation of such a powerful being, you're already
> doing that!
No, I am handing the decision off to an abstract invariant that
extrapolates the collective volition of humankind to produce a secondary
dynamic that returns a decision. I don't know what the decision will be.
I just know that the collective volition of humankind, which is a set of
superposed spreads, will be satisficed by the choice of decision process
that is used to make the decision. Oh, and I know that the Last Judge
didn't throw the off switch on it.
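A toy schematic of that flow (and only a schematic: every name below is a
hypothetical stand-in, and the extrapolation step is exactly the part
nobody currently knows how to write):

# A toy schematic of the flow just described, not the Collective Volition
# proposal itself: every name here is a hypothetical stand-in, and the
# extrapolation step is precisely the part no present-day program can perform.

from typing import Any, Callable, Optional

def extrapolate_collective_volition(humankind: Any) -> dict:
    """Placeholder for the unsolved step: produce the 'set of superposed
    spreads' over what humankind would want if it knew more and thought
    faster."""
    raise NotImplementedError("nobody knows how to write this part")

def choose_satisficing_decision_process(spreads: dict) -> Callable[[], Any]:
    """Placeholder: pick a secondary dynamic (a decision process) that the
    superposed spreads satisfice, instead of hand-coding the answer."""
    raise NotImplementedError("nor this part")

def initial_dynamic(
    humankind: Any,
    last_judge: Callable[[Callable[[], Any]], bool],
) -> Optional[Callable[[], Any]]:
    # 1. The abstract invariant: extrapolate, rather than poll current opinion.
    spreads = extrapolate_collective_volition(humankind)
    # 2. Produce the secondary dynamic that will actually return decisions.
    secondary_dynamic = choose_satisficing_decision_process(spreads)
    # 3. The Last Judge can only throw the off switch, not edit the output.
    if not last_judge(secondary_dynamic):
        return None
    # 4. The programmers never choose the object-level decision themselves.
    return secondary_dynamic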
>> No, not even for the sake of public relations. The public relations
>> thingy is a lost cause anyway.
>
> Well it surely is now. Do you think the politicians and military would
> just hand over their wills, even if you say their "future" selves would
> approve? There's a good chance this concept of yours could get some very
> bad press - and worse - governmental/police/military/whatever
> intervention.
That was true regardless of whether I proposed the Right Thing on
collective volition, or some hideously wrong hacked-up thing that sounded
slightly better from a public relations perspective.
>> Also, you must define the "is-individual?" predicate in
>> "individual volition" exactly right on your first try, including the
>> whole bag of worms on sentience and citizenship, because if the
>> initial dynamic can redefine "is-individual?" using a majority vote,
>> you aren't really protecting anyone.
>
> I very much doubt, at least initially, humanity would vote to merge into
> a single Jupiter-brain (since 99.5%+ of them aren't Singularitarians),
> or whatever SL4 topic about sentience you wanna throw at me. It's
> similar to many of those improbable moral dilemmas which won't have any
> application in the real world.
I agree. Why did you think I wouldn't agree?
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence