From: Gordon Worley (redbird@rbisland.cx)
Date: Sat Aug 04 2001 - 19:56:12 MDT
At 4:40 PM -0700 8/4/01, James Higgins wrote:
>I've got a question for you all about the Sysop scenario. Can
>individuals opt out? And if not, why not?
No. Quite simply, if someone can get outside the Sysop, they could
hose the Universe. Alice could say 'Sysop, let me out' and then the
Sysop lets her out. Then she proceeds to release grey goo on the
Universe and kill us all. It's all or nothing, as I think Eliezer
wrote a few posts ago.
>Further, why couldn't individuals come and go from the Sysop's
>sphere? Let's say I want to go check out Alpha Centauri. The sysop
>denies me this ability because that is outside her control. I leave
>the Sysop's domain, travel to Alpha Centauri, do my stuff and then
>return, reentering the Sysop's domain.
If you want to go somewhere, that's fine, but you are still going to
be under Sysop control. The Sysop is not just one process, but many
processes running in every bit of computronium (this may not be how
it's actually implemented, but I think you get the idea that the
Sysop just is and is everywhere). So, you can go to Alpha Centauri;
it's just that you're still not going to be able to violate the
volition of anyone else.
Actually, this brings up an interesting issue: what are the default
permissions for interacting with alien (i.e., outside Sysop Space)
minds? I would think something like mode 700 would be safe (okay, so
only the 'other' bits would have any meaning for an alien, but I had
to have an easy way to write it), with adjustments made per the
alien's requests. Also, I think this calls for an extra set of bits
(alien bits). Oh, and if the Sysop arises from the AI, or from a
mind in general, it is probably the natural outcome of any
Singularity attempt (at least where Friendliness was implemented,
because if it wasn't, then we had better get to the Singularity fast
so we can figure out how to stop the hosing before it gets us).
Hmm, I think I'm going to come up with formal modes for use in Sysop
Space rather than saying 'something like Unix mode xxx'. I'll get
back to the list on this one shortly ...
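To make the mode-700 analogy concrete, here is a toy decoder for
Unix-style permission modes (Python, purely illustrative; the
'alien bits' extension mentioned above would just be a fourth octal
digit handled the same way, and is my own hypothetical):

```python
# Toy decoder for Unix-style permission modes. Each octal digit is
# a 3-bit read/write/execute triad for owner, group, and other; an
# 'alien' triad would simply be a fourth digit appended the same way.

def describe(mode: int) -> str:
    """Render a 3-triad mode as an rwx string, e.g. 0o700 -> 'rwx------'."""
    out = []
    for shift in (6, 3, 0):          # owner, group, other
        bits = (mode >> shift) & 0o7
        out.append(''.join(ch if bits & b else '-'
                           for b, ch in ((4, 'r'), (2, 'w'), (1, 'x'))))
    return ''.join(out)

# Mode 700: the mind itself has full access; group and other (and
# any alien triad) get nothing until permissions are explicitly
# granted per the alien's requests.
print(describe(0o700))   # rwx------
```

The point of the analogy is just the default-deny stance: everything
beyond the owning mind starts at zero.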
>So, what are your thoughts?
I don't see your concerns about not being able to get it right as a
problem. Basically, here is what the Sysop is going to do before we
upload anyone or move any AIs onto computronium (so maybe this is
actually what the Transition Guide would do):
- Develop a design for the Sysop (based on what I've written, what
Eli's written, what's on this list, and anything else interesting
that will surely come along).
- Refine the design using the coding heuristics (vis codic cortex?)
ve has developed.
- Develop a system (whatever it may actually be; probably something
like the Sysop, or you might call it Unix for the Singularity).
- Stress test it in simulation (yes, this will be as good as the
real thing; the AI should have the ability to do this).
- After full stress testing, put a few (viz. millions of) copies of
Friendly AIs on it and have them try it out. Have them all try to
violate volition at once. Really test the thing out and see what
happens. Since they're Friendly AIs, they won't actually do anything
unFriendly, just pretend to (in a way that the Sysop can't
distinguish from a real attempt).
- Finally, begin transitioning in minds. First, bring in the
(Friendly) AIs. Then, bring in known altruists (e.g. Eliezer). Then
bring in suspected altruists. Then bring in the rest. All along the
way, make sure that the system keeps working. The Friendliest ones
are put in first so that if something goes wrong they won't hose the
universe.
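The transition order above can be sketched as a simple loop
(hypothetical Python; the phase names come from the plan, but the
minds and the health check are stand-ins, not a real design):

```python
# Minds are admitted in order of decreasing trust, and each phase
# must pass a health check before the next one begins. If anything
# breaks early, only the most Friendly minds are inside.

PHASES = [
    ("Friendly AIs", ["AI-1", "AI-2"]),
    ("known altruists", ["Eliezer"]),
    ("suspected altruists", ["Mind-A"]),
    ("everyone else", ["Mind-B", "Mind-C"]),
]

def system_healthy(admitted):
    # Stand-in for the real check: no admitted mind has managed
    # to actually violate anyone's volition.
    return True

def transition():
    admitted = []
    for label, minds in PHASES:
        admitted.extend(minds)
        if not system_healthy(admitted):
            raise RuntimeError(f"halt: failure while admitting {label}")
    return admitted

print(transition())
```

The design choice being illustrated is just the ordering: failures
surface while the population is still maximally Friendly.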
If after all that there are still bugs, they probably won't be
catastrophic (if they are, we're hosed, but we would be without the
Sysop anyway). Actually, I should note that we wouldn't be hosed for
certain, just most likely.
Hmm, this post gives me some good ideas, so I'm going to have to go
work on them.
Oh, finally, while we're at it: Eli mentioned wishing he'd named the
Sysop Scenario the Unix Scenario, if only he'd been more PRish when
he came up with it. Well, I'd say only a handful of us have ever
really thought about the Sysop Scenario, so now would be a good time
to change the name. It would probably be better not to write about a
sysop anyway, since there may not actually be any mind called the
Sysop (after all, it takes extra cycles for the AI to be able to use
the word 'I', and there is no good reason for the Sysop to do so,
IMO), just a very intelligent system that protects the universe from
violation of volition and from being toasted.
So, we have two proposals at the moment: Sysop and Unix. Any
others? My only concern with Unix is that we don't want to associate
what is actually going to happen too closely with Unix, but the
problem at the moment is people thinking of the Sysop too much as a
god controlling their lives and refusing to let them live free (as
if they get to live free now ...).
--
Gordon Worley <redbird@rbisland.cx>
http://www.rbisland.cx/
PGP: 0xBBD3B003
`When I use a word,' Humpty Dumpty said, `it means just what I
choose it to mean--neither more nor less.' --Lewis Carroll
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT