From: Durant Schoon (firstname.lastname@example.org)
Date: Fri Aug 03 2001 - 09:58:04 MDT
> From: James Higgins <email@example.com>
> At 10:50 AM 8/1/2001 -0700, Durant Schoon wrote:
> >Ok, it's time to ask if this scenario is more than a bad Sci-Fi
> >plot. I think it relates to a class of problems such as:
> >1) Should an SI spend time looking for other hidden AIs as soon
> > as possible?
> >2) Should an SI spend time looking for ET?
> >3) Should an SI spend time trying to determine if it is already
> > in a simulation?
> >Of course only an SI should decide these things, but we as humans
> >already pour resources into #2. Maybe the DOD spends money on #1.
> >And #3...well, that's deliciously undetectable isn't it ;-)
> All of the above? If it is truly an "SI", then these tasks should be
> trivial for it (at least when comparing its ability to ours). It should be
> able to spot #1 easily, since it is an SI looking for AIs.
I was thinking more along the lines of "How much time, if any, should an SI
(or an AI) spend?". It's one of those things that might seem unlikely, but
could have a huge impact if found...or it could be a wild goose chase resulting
in nothing. Should an SI spend time wondering if there is a God? Or whether
it is in a simulation (below)?
> If there has
> been contact, #2 should be simple since it should be easy for an SI to get
> the real truth out of any human. If there has not been any contact, #2
> would be incredibly difficult if they didn't want to be found, since they
> are likely to either be or have SIs themselves.
Again, I was thinking: "How much time should an SI spend looking for aliens?"
Perhaps after all the major problems are resolved, i.e., we are safely drifting
in space, reliant on our own portable energy sources, with famine and disease
erased, an SI might turn ver attention to ET life.
Currently there are humans working at SETI full time, while others are
dying of malnutrition. Admittedly, you don't see me devoting my life to
saving lives...I'm making frivolous movies. Perhaps this is my twofold
answer to Amara's question:
It is good to hasten the advent of the Singularity because:
1) We can save lives and improve quality of life.
2) We can avert the disasters of mismanaging potentially dangerous (yet
beneficial) new technologies (nanotech, bioengineering).
That is, we want to create a source of decision making
(an SI) that is more reliable than humans in the task
of ensuring our safety.
> The last one could be a piece
> of cake for an SI, or maybe not. Heck, if #3 is true, creating an SI may
> either not be possible at all, or it may halt the simulation (god, I hope
> that isn't it). Or the goal of the simulation may have been to produce an
> SI! Do yourself a favor, don't get me started on #3.
Hmm, if the simulation is created by an SI of greater intelligence, S'
(S prime), don't you think that S' would have sealed off the simulation
so that it can't be altered or halted from "inside"?
Oh wait, I wasn't supposed to get you started on #3 ;-)
-- Durant Schoon
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT