Re: Envisioning sysop scenarios Re: Universal Uplift as an alternative to the Sysop scenario

From: Dale Johnstone (DaleJohnstone@email.com)
Date: Wed Mar 21 2001 - 09:37:25 MST


Brian wrote:
>Is it possible to have other scenarios where the sysop does not infect
>all the mass in the solar system, while still ending all evil? I think it
>could be done through heavy surveillance, including both real and virtual
>realities. But this would be more dangerous IMO since if someone escapes
>the surveillance, and builds a competitor SI that then infects all the matter
>then you've got problems.

I've never been completely happy with the SysOp scenario. Having to micro-manage (nano-manage?) the entire solar system is terribly wasteful. Watching out for silly people playing chicken with nukes and trying to keep them safe is just plain crazy. Let them do that in a VR, or don't enable them to do that at all. They never had the 'right' to survive nuclear explosions to begin with, so it's not like we're taking anything away.

If we can have faith in one SI doing the right thing, why not in more than one? The Friendliness attractor that keeps a 'SysOp' SI on the right path will keep other SIs on the right path too. If a second intelligence can clearly demonstrate its mind is Friendly, then I see no reason why the first should have to baby-sit the other. It's a mature intelligence - it won't *want* to do anything evil.

Multiplication may be unavoidable - assuming information can travel no faster than light, a SysOp that allowed physical humans to travel in space would have to essentially split its mind in order to function promptly at the local level. I can imagine some situations in which its protection might be compromised by the communication latency across the entire mind. Of course, it would realize this and take conservative preventative measures.
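(For a rough sense of the scale involved: light covers 1 AU in about 500 seconds, so a signal from Earth to Neptune at roughly 30 AU takes around 4 hours one way - over 8 hours round trip. Any local crisis would be long over before the far side of the mind even heard about it.)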

How much easier it would be if all minds in the physical world were Friendly and supervised themselves. Non-Friendly creatures like regular humans can be safely kept inside a VR, or perhaps on Earth (which *would* need micro-managing).

I really don't like the idea of trying to cope with hordes of minds all going out of their way to subvert Friendliness. It's just asking for trouble. It would be better never to reach that dangerous situation to begin with. The simple solution is not to give them powerful abilities until they've demonstrated they're sufficiently Friendly. In the meantime they have the option of a safe VR if they want to play rough. Physical reality would still need managing, but not to the same degree. That seems safer and Friendlier to me. Humans have always placed restrictions on individuals for the good of the whole; I have no moral problem with common-sense restrictions when going without them would be grossly wasteful of resources, or outright risky.

Of course, any speculation about the best system of management is largely irrelevant since a >H intelligence will outperform us all in that regard. Our priority should be to make the best mind possible, as soon as possible.

I actually doubt many would want to play outside in the gloomi-verse anyway. I mean, shifting great lumps of matter around under all sorts of stupid laws you can't change just won't compare to a place where almost anything is possible. Wanting to would show a lack of imagination on their part - anything they could do on the outside, they can do on the inside, and *far* more.

Regards,
Dale Johnstone.
