Re: Envisioning sysop scenarios Re: Universal Uplift as an alternative to the Sysop scenario

From: James Higgins (jameshiggins@earthlink.net)
Date: Thu Mar 22 2001 - 08:01:00 MST


At 01:48 PM 3/21/2001 -0500, Brian Atkins wrote:
>Dale Johnstone wrote:
> > Brian wrote:
> > >Is it possible to have other scenarios where the sysop does not infect
> > >all the mass in the solar system, while still ending all evil? I think it
> > >could be done through heavy surveillance, including both real and virtual
> > >realities. But this would be more dangerous IMO since if someone escapes
> > >the surveillance, and builds a competitor SI that then infects all the
> > >matter, then you've got problems.
> >
> > I've never been completely happy with the SysOp scenario. Having to
> > micro-manage (nano-manage?) the entire solar system is terribly wasteful.
> > Watching out for silly people playing chicken on nukes, trying to keep
> > them safe is just plain crazy. Let them do that in a VR or don't enable
> > them to do that at all. They never had the 'right' to survive nuclear
> > explosions to begin with, so it's not like we're taking anything away.
>
>Well I think in the after-Singularity era you have to turn such questions
>around.. you have to ask, if we can do something like end all evil, why not
>do it? The only reason I see not to do it is if a) the entities (SIs) capable
>of doing it are unwilling to spend their time doing it, or b) it would take
>too much of the solar system's resources.

Or the big one that everyone seems to keep missing: if the only way to
accomplish it is to severely impinge on freedom, it should not be done. I
imagine the desire to have free will won't change, even for SIs. Heck, it
may be even more the case for them; what's the point of being able to do
anything if you're not allowed to do much at all?

I also have a serious problem believing any intelligence can fully
'control' another, especially when you get into controlling SIs. The only
way to completely prevent loopholes and the like would, most likely, be a
serious limitation on possible actions. A Sysop-type mechanism is only
really necessary if SIs in general can't be trusted. And if that is the
case, why bother with the Sysop idea at all? Now, I can understand the
idea of a Sysop protecting humans and the like from SIs, if needed.

> > I actually doubt many would want to play outside in the gloomi-verse
> > anyway. I mean shifting great lumps of matter around with all sorts of
> > stupid laws you can't change just won't compare to a place where almost
> > anything is possible. It would be a lack of imagination on their part -
> > they can do whatever they want to do on the outside, on the inside, and
> > *far* more.

Actually, I doubt many would want to play in a VR. What's the point?
Having infinite control of nothing would get boring very quickly, I
imagine. I can see where a self-indulgent human may think that life in a
VR that is totally under their control would be great. Heck, I can imagine
years' worth of things that would be, shall we say, very entertaining. But
I have a feeling that such self-indulgence won't be as interesting to SIs,
who could essentially simulate anything they want internally at any time
anyway. The real universe, full of real matter and real problems, would
provide the only remaining challenges and opportunities to continue learning.

>Perhaps, but I still want my personal starship.. at least for a while :-)

Just one? Personally, I'd like to clone myself a couple of times and head
out in several directions at once. :>

James Higgins


