A Sysop alternative

From: Gordon Worley (redbird@rbisland.cx)
Date: Mon Apr 09 2001 - 00:45:43 MDT

Okay, sorry if we've touched on this before or if it's covered
elsewhere, but I couldn't remember doing so, which leads me to bring
this up now. If it's all been done before, just post some links and
I'll come back if there is something to respond to therein.

My thought is that we don't need a Sysop, but an alternative that
would have the same effect. See, my problem with the Sysop is that it
is an actual intelligence, and so it should act the same way as all
other SIs will. In that case, the Sysop is going to have to be pretty
damn bug free to work. Sure, all code has bugs and they get fixed,
but the Sysop is given enough power that one little bug could cause
major problems. Thus, in my opinion, we need to change how the system
works: instead of creating an intelligence to take care of matters of
morality (because the Sysop is enforcing morality through the ethics
of Friendliness), we should work on implementing morality directly.

But first, an introduction to where I'm coming from. :-)

My thinking stems from natural laws. The quick run down is that
natural laws make morality a facet of the nature of a system. Lions
have certain morals because their nature requires it. Now, humans
are a funny animal. Unlike other animals, for whom morals == natural
laws, humans can use their intelligence to create artificial morals.
The more intelligent we get, the stronger (and therefore worse) these
artificial morals become. Much of the trouble in the world today can
be ascribed to artificial morals that have supplanted the natural
ones.

For those interested: humans are naturally selfish, just like all
other animals, and everything they do is in respect to that (well, at
least when the natural laws haven't been supplanted). Respect for the
desires of others is another natural law (Eliezer has a good v word
for this that I keep forgetting), which is really an outgrowth of
selfishness, but I figured I should point it out for the list's
benefit. Humans also have a concept of property that is naturally
highly respected, but it has been supplanted in plenty of societies by
leaders who want to violate your property. And in some societies, a
lack of scarcity was experienced for long enough that the idea of
property was abandoned (since scarcity and better survival are the
reasons for having property in the first place).

Also, just to clear up one more thing before I go on: stop thinking
of selfishness as the same thing as greed. I've written at length
about this subject in other forums, but the short of it is that greed
looks at the gross benefit while selfishness looks at the net benefit.
Maybe I've written about it here before, too, but I'm just trying to
cover all the bases before I move on.

Now, back to what I really was posting about. If you have questions
about the above, let's keep them off list unless they somehow
directly affect my proposal below. They were meant for your benefit
so that you could understand the basic assumptions that I made to
come to my line of reasoning.

Now, what I propose is that, rather than creating a Sysop, we program
morals into the system as natural laws and have the system enforce
them. In other words, the system actively makes it impossible to act
outside of what is moral by the natural laws. How will we figure
those laws out? Why, we first have some AIs around that are SIs and
are more or less Friendly, and maybe some posthumans that we trust
not to set up their own morals or drag along their human ones; we let
them live in a system for a while, figure out what the naturally
emergent laws are, and then implement those laws at the most basic
levels of the system. Then, everyone who wants to use the
Earth-created Singularity technology will have to do so under what we
determine to be the natural laws. No more allowing the intelligent to
break their own laws; we just make it impossible. I realize already
that this shares most of the complaints about the Sysop (SIs outside
of ver control, bugs, et al.), but it is free from some others, like
the Sysop's potential to abuse power, ver limited ability to enforce
morality (after all, there's only so much Sysop to go around), and
ver need to be watched to ensure that there are no bugs, along with
the ability to decide what counts as a bug, etc.
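
To make the distinction concrete, here is a toy sketch in Python.
Every name in it is hypothetical, invented purely for illustration
(none of this comes from the actual Sysop Scenario): in the first
model an intelligent mediator judges every action, so a single bug in
its judgment breaks everything; in the second, the only transfer
primitive the substrate exposes is already keyed to ownership, so an
unlawful action isn't vetoed so much as simply inexpressible.

```python
# Toy contrast between the two enforcement models; all names are
# hypothetical illustrations, not part of any real proposal.

class SysopModel:
    """Sysop-style: an intelligent mediator judges each action.
    All safety rests on allows(); one bug there and a forbidden
    action slips through."""

    def __init__(self, owners):
        self.owners = dict(owners)  # resource name -> owner name

    def allows(self, actor, resource):
        # The mediator's judgment lives here.
        return self.owners.get(resource) == actor

    def transfer(self, actor, resource, new_owner):
        if not self.allows(actor, resource):
            raise PermissionError(f"{actor} may not transfer {resource}")
        self.owners[resource] = new_owner


class NaturalLawModel:
    """Substrate-style: the only transfer primitive that exists is
    keyed on the current owner, so a non-owner's attempt is not a
    vetoed request but an operation that does nothing, the way
    pushing on a wall does nothing."""

    def __init__(self, owners):
        self._owners = dict(owners)

    def transfer(self, actor, resource, new_owner):
        # Compare-and-set: succeeds only if actor already owns it.
        if self._owners.get(resource) == actor:
            self._owners[resource] = new_owner
            return True
        return False

    def owner(self, resource):
        return self._owners.get(resource)
```

The point of the sketch is only the architectural difference: in the
second class there is no separate judge to get wrong or to run short
of, because the law is a property of the primitive itself.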

Right now I feel like I have a piece of Swiss cheese with a few small
holes (though it is late; look at the time stamp), but maybe you can
open up some larger ones.

Also, I've been thinking. Back when the industrial revolution
started, many people looked at the new technology and wondered how
the world would ever work unless the state stepped in and managed
things (thus the rise in the popularity of socialism around these
times in various locations). Today, it is easy to see how our
industrial and now informational society could work without state
intervention, but the future looks uncertain. It seems again that
the state (or our equivalent of it) will have to step in and regulate
to make things work out alright. Considering the past, I have to
wonder whether we'll see the same thing again, or whether these
technologies really are capable of destroying everything, to the
extent that what natural laws survive on their own will not be enough
to prevent the End of Everything.

I have my own ideas which should be pretty obvious, but I need some
feedback. Maybe I'm missing something and I just need it pointed out.

Well, that's enough for tonight. I'm tired. I'll go to bed and
worry about this in the morning when the rest of the list is awake
and has had a chance to respond. :-)

Good night.

Gordon Worley
PGP Fingerprint:  C462 FA84 B811 3501 9010  20D2 6EF3 77F7 BBD3 B003
