From: Eliezer S. Yudkowsky (email@example.com)
Date: Sun Feb 04 2001 - 21:48:41 MST
Samantha Atkins wrote:
> "Eliezer S. Yudkowsky" wrote:
> > Advice - freely offered, freely rejected.
> What does it mean to reject the advice of a Being that controls all the
> material local universe to a very fine level
1: Sysop (observes): Samantha's volitional decision is that she would
like me to offer advice as long as I don't use persuasive methods that
'force' her decision - that is, use persuasive methods that are powerful
enough to convince her of false things as well as true things.
2: Sysop: "You know, Samantha, Sysops aren't such bad things."
3: Samantha: "I disagree!"
4: Sysop: "OK."
5: Samantha: "I will start a revolutionary committee to overthrow the Sysop."
6: Sysop: "OK."
7: Samantha: "Please put out a notice to that effect on the public network."
8: Sysop: "OK."
9: Samantha: "Please construct a SysopKiller device using matter which I own."
10: Sysop: "OK."
11: Samantha: "Fire the SysopKiller at this test target so I can see how it works."
12: Sysop: "OK."
13: Samantha: "Fire the SysopKiller at yourself."
14: Sysop: "API error."
> and will not allow
> disagreement that leads to possible actions that it decides are possibly
> harmful to the sentiences in its care? Where is the freedom? I see
> freedom to disagree but not to fully act on one's disagreement?
The Sysop rules won't allow you to kill someone without vis permission.
You can advocate killing people to your heart's content.
> > Build another SI of equal intelligence - sure, as long as you build ver
> > inside the Sysop.
> What for? That would rather defeat the purpose of having more than one
> local Entity of such power. A single entity is a single point of
> failure of Friendliness and a great danger.
Multiple entities are multiple points of failure of Friendliness, and even greater dangers.
A failure of Friendliness in a transcending seed AI results in a total
takeover regardless of what a Friendly AI thinks about the Sysop
Scenario. Once an AI has *reached* the Sysop point you're either screwed
or saved, so forking off more Sysops after that point is a particularly pointless precaution.
> > Build an Ultraweapon of Megadeath and Destruction so you can see how it
> > works - sure, as long as there's a bit of Sysop somewhere inside the
> > trigger making sure you don't point it at the Amish communities on Old
> > Earth.
> Building ultra-weapons is not at all the point or anything I intend.
So what *do* you intend that you can't do with a Sysop? No tautological
answers like "Build something outside the Sysop"; name some specific fun
or meaningful thing that you should, morally, be able to do, but which the
Sysop won't let you do.
> > Build an Ultraweapon that you can aim anywhere, with no Sysopmatter
> > (visible or not) anywhere near it - you might still be able to get away
> > with this, as long as the Sysop can predict the future with total
> > certainty and predict that you'll never abuse the Ultraweapon, regardless
> > of any external influences you encounter. Probably no human, even Gandhi,
> > is subject to this prediction, but an uploaded Gandhi turned transhuman
> > might be.
> Again, the Sysop arrogates all decisions and all wisdom to itself. How
> about upgrading its uploads to their own ever-increasing wisdom.
15: Sysop: "OK."
> > Under absolutely none of these circumstances does the Sysop need to strike
> > back at you. Ve just gives you an API error.
> Err. Already assumes precisely my point. This being is effectively
> God. You exist only within it and as it allows. Are you really willing
> to take on the building of such? Are you so convinced it is the Only Answer?
Yes, and yes. The risks inherent in material omnipotence are inherent in
rapid transcendence and thus inherent in AI. The Sysop Scenario adds
nothing to that.
And, incidentally, the Sysop Scenario isn't what I'm telling a Friendly
seed AI to do. It's what I think a Friendly superintelligence will decide
to do. I'm neither taking on the building of such, nor need to be
convinced that it's the Only Answer.
> > For all I know, it's entirely okay to fork off and run under your own
> > Sysop as long as that Sysop is also Friendly. (People who chime in about
> > how this would dump us into a Darwinian regime may take this as an
> > argument against Sysop splitting.) The static uploads may even form their
> > own polises with different operating systems and rules, with the
> > underlying Sysop merely acting to ensure that no citizen can be trapped
> > inside a polis.
> But this Sysop can't be built by your earlier response except totally
> within the Sysop so in no real sense is it independent.
No, I'm pointing out a possible variation on my earlier response (albeit
one that I personally think improbable), under which it's possible to
construct an independent Sysop as long as it's an independent Friendly Sysop.
> I am concerned
> by the phrase "static uploads". Do you mean by this that uploads cannot
> grow indefinitely in capability?
No, I mean modern-day humans who choose to upload but not to upgrade.
> > This brings up a point I keep on trying to make, which is that the Sysop
> > is not a ruler; the Sysop is an operating system. The Sysop may not even
> > have a public personality as such; our compounded "wishes about wishes"
> > may form an independent operating system and API that differs from citizen
> > to citizen, ranging from genie interfaces with a personality, to an Eganic
> > "exoself", to transhumans that simply dispense with the appearance of an
> > interface and integrate their abilities into themselves, like motor
> > functions. The fact that there's a Sysop underneath it all changes
> > nothing; it just means that your interface (a) can exhibit arbitrarily
> > high levels of intelligence and (b) will return some kind of error if you
> > try to harm another citizen.
> Let's see. The SysOp is a super-intelligence. Therefore it has its own
> agenda and interests.
> It controls all aspects of material reality and
> all virtual ones that we have access to.
> This is a good deal more than
> just an operating system.
Why? The laws of physics control all aspects of material reality too.
> What precisely constitutes harm of another
> citizen to the Sysop?
Each citizen would define the way in which other entities can interact
with matter and computronium which that citizen owns.
> For entities in a VR who are playing with
> designer universes of simulated beings they experience from inside, is
> it really harm that in this universe these simulated beings maim and
> kill one another? In other words, does the SysOp prevent real harm or
> all appearance of harm? What is and isn't real needs answering also.
I don't see how this moral issue is created by the Sysop Scenario. It's
something that we need to decide, as a fundamental moral issue, no matter
which future we walk into.
> > Yep. Again, for static uploads, the Sysop won't *necessarily* be a
> > dominant feature of reality, or even a noticeable one. For sysophobic
> > statics, the complexity of the future would be embedded entirely in social
> > interactions and so on.
> If it is present at all it will be noticeable except for those who
> purposefully choose to design a local space where they do not see it.
Yes, that's right.
> > Of course not. You could be right and I could be wrong, in which case -
> > if I've built well - the Sysop will do something else, or the seed AI will
> > do something other than become Sysop.
> OK. If it is not the Sysop what are some of the alternate scenarios
> that you could see occurring that are desirable outcomes?
1) It turns out that humanity's destiny is to have an overall GroupMind
that runs the Solar System. The Sysop creates the infrastructure for the
GroupMind, invites everyone in who wants in, transfers control of API
functions to the GroupMind's volition, and either terminates verself or
joins the GroupMind.
2) Preventing citizens from torturing one another doesn't require
continuous enforcement by a sentient entity; the Sysop invokes some kind
of ontotechnological Word of Command that rules out the negative set of
possibilities, then terminates verself, or sticks around being helpful
until more SIs show up.
> > Yes. I think that, if the annoyance resulting from pervasive forbiddance
> > is a necessary subgoal of ruling out the space of possibilities in which
> > citizenship rights are violated, then it's an acceptable tradeoff.
> If the citizens have no choice then there is no morality.
That sounds to me like one more variation on "It's the struggle that's
important, not the goal." What's desirable is that people not hurt one
another. It's also desirable that they not choose to hurt one another,
but that's totally orthogonal to the first point.
You can still become a better person, as measured by what you'd do if the
Sysop suddenly vanished.
Are we less moral because we live in a society with police officers?
Would we suddenly become more moral if all law enforcement and all social
disapprobation and all other consequences of murder suddenly vanished?
> There is only
> that which works by the Sysop's rules and that which does not. In such
> a universe I see little impetus for the citizens to evolve.
11: Johnny: "This is my thought sculpture."
12: Samantha: "It sucks."
21: Eliezer: "This is my thought sculpture."
22: Eliezer: "It sucks."
> > Please note that in your scenario, people are not all free free free as a
> > bird. In your scenario, you can take an extended vacation from Sysop
> > space, manufacture a million helpless sentients, and then refuse to let
> > *them* out of Samantha space. You can take actions that would make them
> > *desperate* to leave Samantha space and they still won't be able to go,
> > because the Sysop that would ensure those rights has gone away to give you
> > a little personal space. I daresay that in terms of the total integral
> > over all sentients and their emotions, the Samantha scenario involves many
> > many more sentients feeling much more intense desire to escape control.
> The Sysop is refusing to let me out of Sysop space. Truthfully we have
> no idea how various sentiences will react to being in Sysop space no
> matter how benign you think it is. Your hypothetical space where I
> torture sentients is an utter strawman.
Is it still a strawman scenario when integrated over the six billion
current residents of Earth? Or is only Samantha allowed to go Outside?
The Friendly seed AI turned Friendly superintelligence makes the final
decision, and ve *does* have an idea of how various sentiences will
react. If the Sysop scenario really results in more summated misery than
letting every Hitler have vis own planet, or if there's some brilliant
third alternative, then the Sysop scenario will undoubtedly be quietly dropped.
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence