From: James Higgins (firstname.lastname@example.org)
Date: Fri Aug 03 2001 - 19:50:14 MDT
At 08:58 AM 8/3/2001 -0700, you wrote:
> > From: James Higgins <email@example.com>
> > At 10:50 AM 8/1/2001 -0700, Durant Schoon wrote:
> > >Ok, it's time to ask if this scenario is more than a bad Sci-Fi
> > >plot. I think it relates to a class of problems such as:
> > >
> > >1) Should an SI spend time looking for other hidden AIs as soon
> > > as possible?
> > >2) Should an SI spend time looking for ET?
> > >3) Should an SI spend time trying to determine if it is already
> > > in a simulation?
> > All of the above? If it is truly an "SI", then these tasks should be
> > trivial for it (at least when comparing its ability to ours). It should
> > be able to spot #1 easily, since it is an SI looking for AIs.
>I was thinking more along the lines of "How much time, if any, should an SI
>(or an AI) spend?". It's one of those things that might seem unlikely, but
>would have a huge impact if found...or it could be a wild goose chase
>resulting in nothing. Should an SI spend time wondering if there is a God?
>Or whether it is in a simulation (below)?
I don't think the quantity of time is relevant. The SI will likely think
much, much faster than we do, and almost certainly must be capable of
multiple threads of consciousness. Therefore, I don't think it needs to
dedicate any time in particular to these tasks. It would probably be
worth having a low-priority background thread running for each as soon as
any major, impending problems are handled. I doubt it would ever devote
100% of its effort to any of these tasks, unless it solved every other
existing problem or one of these became extremely urgent for some reason.
> > The last one could be a piece
> > of cake for an SI, or maybe not. Heck, if #3 is true, creating an SI
> > may not be possible at all or it may halt the simulation (god, I hope that
> > isn't it). Or the goal of the simulation may have been to produce an
> > SI! Do yourself a favor, don't get me started on #3.
>Hmm, if the simulation is created by an SI of greater intelligence, S'
>(S prime), don't you think that S' would have sealed off the simulation
>so that it can't be altered or halted from "inside"?
Why would our SI halt the simulation? I meant that the simulation could be
set up to halt if a Singularity occurs within.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT