From: Brian Atkins (brian@posthuman.com)
Date: Wed Mar 21 2001 - 11:48:20 MST
Dale Johnstone wrote:
>
> Brian wrote:
> >Is it possible to have other scenarios where the sysop does not infect
> >all the mass in the solar system, while still ending all evil? I think it
> >could be done through heavy surveillance, including both real and virtual
> >realities. But this would be more dangerous IMO, since if someone escapes
> >the surveillance and builds a competitor SI that then infects all the
> >matter, you've got problems.
>
> I've never been completely happy with the SysOp scenario. Having to micro-manage (nano-manage?) the entire solar system is terribly wasteful. Watching out for silly people playing chicken with nukes, trying to keep them safe, is just plain crazy. Let them do that in a VR, or don't enable them to do it at all. They never had the 'right' to survive nuclear explosions to begin with, so it's not like we're taking anything away.
Well, I think in the post-Singularity era you have to turn such questions
around: you have to ask, if we can do something like end all evil, why not
do it? The only reasons I see not to do it are that a) the entities (SIs)
capable of doing it are unwilling to spend their time doing it, or b) it
would take too much of the solar system's resources.
>
> If we can have faith in one SI doing the right thing, why not in more than one? The Friendliness attractor that keeps a 'SysOp' SI on the right path will also keep other SIs on the right path. If a second intelligence can clearly demonstrate its mind is Friendly, then I see no reason why the first should have to baby-sit the other. It's a mature intelligence - it won't *want* to do anything evil.
Right, I agree that if an SI can prove itself Friendly somehow, then there
should be no need to monitor it.
>
> Multiplication may be unavoidable - assuming information can travel no faster than light, a SysOp that allowed physical humans to travel in space would have to essentially split its mind in order to function promptly at the local level. I can imagine some situations in which its protection might be compromised by the communication latency across the entire mind. Of course, it would realize this and take conservative preventative measures.
>
> How much easier it would be if all minds in the physical world were Friendly and supervised themselves. Non-friendly creatures like regular humans can be safely kept inside a VR, or perhaps on Earth (which *would* need micro-managing).
As long as you surveil the people with access to real matter, it may
indeed be possible to have minimal requirements for the VR people. You'd
have to protect their data, of course.
>
> I really don't like the idea of trying to cope with hordes of minds all going out of their way to subvert Friendliness. It's just asking for trouble. It would be better never to reach that dangerous situation to begin with. The simple solution is not to give them powerful abilities until they've demonstrated they're sufficiently Friendly. However, they have the option of a safe VR if they want to play rough. Physical reality would still need managing, but not to the same degree. That would seem safer and Friendlier to me. Humans have always placed restrictions on individuals for the good of the whole; I have no moral problem with common-sense restrictions when going without them would be grossly wasteful of resources or risky.
Right, the main thing you want to avoid is something like a Blight that
stays hidden for a while before attacking everything.
>
> Of course, any speculation about the best system of management is largely irrelevant since a >H intelligence will outperform us all in that regard. Our priority should be to make the best mind possible, as soon as possible.
>
> I actually doubt many would want to play outside in the gloomi-verse anyway. I mean, shifting great lumps of matter around under all sorts of stupid laws you can't change just won't compare to a place where almost anything is possible. It would be a lack of imagination on their part - anything they could do on the outside, they can do on the inside, and *far* more.
Perhaps, but I still want my personal starship... at least for a while :-)
--
Brian Atkins
Director, Singularity Institute for Artificial Intelligence
http://www.intelligence.org/