Re: Sysops, volition, and opting out

From: Gordon Worley (
Date: Mon Aug 06 2001 - 17:50:50 MDT

At 4:52 PM -0600 8/6/01, John Stick wrote:
>Some people won't like the sysop for just this reason. They were hoping the
>singularity would drive governments to extinction. But using an innocuous
>term like Unix scenario will not fool many people, and it will keep you from
>squarely addressing the need for the sysop (if need there be), the
>protections it can give, and the ways it can be made friendly.

The Sysop is needed; Unix Reality is just a clarification of how the
system will likely work. After all, most of the time the Sysop
probably won't need to do anything intelligent to tell Alice she
can't kill Bob and the like. But I agree with you that there is a
need to talk about the Sysop. It's just that, since the Sysop won't
be responsible for a lot of the infrastructure (at least under the
model I'm currently proposing), the two should be separated to avoid
confusion.
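To make the "Unix Reality" analogy concrete, here is a minimal,
purely hypothetical sketch of the idea that most Sysop decisions are
dumb permission checks, much like a kernel denying a write to a file
the caller doesn't own. The function name and parameters are my own
illustration, not anything from the original discussion.

```python
# Hypothetical sketch: most Sysop decisions as simple permission checks.
# All names and the consent model here are illustrative assumptions.

def sysop_allows(actor, target, consented):
    """Permit self-regarding acts unconditionally; permit acts on
    another being only if that being has consented."""
    if target is None or target == actor:
        return True          # affects only the actor: no check needed
    return consented         # affects someone else: consent required

print(sysop_allows("Alice", "Bob", consented=False))   # denied
print(sysop_allows("Alice", None, consented=False))    # allowed
```

The point of the analogy is that, as with Unix file permissions, no
intelligence is needed for the common case: the check is a lookup, not
a deliberation.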

>Using protection of volition as the moral underpinning of the sysops
>activity is fair enough, so long as it is understood as a gesture in a
>general direction, rather than anything more specific. Both Kant and Mill
>can be understood as using protection of volition as the foundation of their
>moral theories, but they differ on many issues. (Even Ayn Rand might be put
>in that group, if one thought her writings were rational, moral, or
>philosophical. I don't, but some here do, and agreements on practical
>issues among these three will not be all that plentiful.) Gordon Worley's
>attempt to diffuse situations where volitions conflict by using an
>active/passive distinction (doesn't that capture the "doing/done to"
>language?) does as well as most attempts: it solves some but not nearly all
>situations (at the cost of changing a protection of volition theory to a
>protection of justifiable volition theory where the "justifiable" does much
>of the work). But hey, if thinking through morality were that easy, we
>would have one less reason for developing more than human intelligence.

What kinds of situations does it not cover? It seems pretty good to
me, but then maybe we have a different outlook on this. I've come to
look at it like this: not being made to do something you don't want
to do is a right; being able to do what you want is a privilege. If
what you want to do is wrong under the rule of non-violation of
volition, then you can't do it. Basically, by making the moral rule
non-violation of volition rather than protection of volition, I free
myself from having to worry about conflicts. That's why I'm always
careful to write 'non-violation of volition'.

Oh, just thinking of this, a rather sticky situation:

Alice and Bob are put in execution seats (I don't care how, but the
seats can kill them). Each has a button that will kill the other and
let the pusher go free. If, after 20 minutes, neither button has
been pushed, both will die. What does the Sysop do? If we're lucky,
Bob says he'll die, so the Sysop lets Alice kill him. But if neither
wants to die, what then? Then again, the Sysop would keep such a
situation from arising to begin with. But suppose there is no Sysop,
just that non-violation of volition is The Moral of the universe, and
someone violated it to put Alice and Bob in this position (or tricked
them)? Maybe both push the buttons at the same time and try to short
the system out?
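The sticky part of the scenario can be spelled out by enumerating the
outcomes. This is my own hypothetical sketch, assuming neither Alice
nor Bob consents to die: every branch kills someone who hasn't
consented, so the non-violation-of-volition rule alone picks no
permitted action.

```python
# Hypothetical enumeration of the execution-seat dilemma, assuming
# neither party consents to die. Outcome names are illustrative.

outcomes = {
    "Alice pushes":   ["Bob"],
    "Bob pushes":     ["Alice"],
    "neither pushes": ["Alice", "Bob"],
    "both push":      ["Alice", "Bob"],  # unless the system shorts out
}

consents_to_die = {"Alice": False, "Bob": False}

for choice, deaths in outcomes.items():
    violated = [who for who in deaths if not consents_to_die[who]]
    print(choice, "->", "violates volition" if violated else "permitted")
```

Every branch prints "violates volition", which is exactly why the
scenario is sticky: the rule forbids everything, so something outside
the rule (consent, a short-circuit, or backups) has to break the tie.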

Or, maybe the solution is 'all the more reason for backups'. :-)

Gordon Worley
PGP: 0xBBD3B003

`When I use a word,' Humpty Dumpty said, `it means just what I choose
it to mean--neither more nor less.'
                                  --Lewis Carroll

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT