From: Jeff Bone (jbone@jump.net)
Date: Fri Dec 07 2001 - 13:34:31 MST
Bryan Moss wrote:
> Gordon Worley wrote:
>
> > > Logic, common sense, and actuarial reasoning should tell
> > > us that *absolute* safety is an impossibility, and my
> > > gut tells me that attempting to task some Power with
> > > providing it is a recipe for disaster.
> >
> > We've already been down this road: anthropomorphic thinking.
> >
> > We cannot be 100% safe, but we'll try to get as damn close
> > to it as possible and have escape routes in case all hell
> > breaks loose.
>
> A wild ride. Personally, I see it as: we're either safe or
> potentially screwed.
I agree with the statement, but not the conclusions...
Bottom line: as long as there is any connection whatsoever to the
physical universe, we are almost certainly screwed in the long run.
Either the universe is open and we experience heat death due to the
second law of thermodynamics (2LT), or it's closed and we experience
collapse, modulo some Tipler-esque imaginary infinity. Given that
absolute safety is a *physical* impossibility, we just need to
realistically assess the tradeoffs between the costs and benefits of
any desired level of safety.
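To put a toy number on that tradeoff (a back-of-the-envelope sketch in
Python; every figure here is made up purely for illustration): if each
additional "nine" of safety costs ten times more than the last, the
expected total cost bottoms out at some finite safety level and
diverges as you push toward 100%.

    # Hypothetical numbers, purely for illustration: each extra "nine"
    # of safety (90%, 99%, 99.9%, ...) costs 10x more than the last,
    # and a catastrophe, if it happens, costs 10^9 units.
    CATASTROPHE_COST = 1e9

    def mitigation_cost(nines):
        return 10.0 ** nines          # assumed exponential cost growth

    def expected_total_cost(nines):
        p_failure = 10.0 ** -nines    # residual risk at this safety level
        return mitigation_cost(nines) + p_failure * CATASTROPHE_COST

    for nines in range(1, 9):
        print(nines, expected_total_cost(nines))

    # The minimum falls around 4-5 nines; as nines -> infinity the
    # mitigation cost diverges while the avoided risk goes to zero,
    # so "absolute" safety is never the optimum.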
> Basically, either some morality holds for
> all intelligences or it does not. For some morality to hold
> for all intelligences I think the following must be true: (a)
> finding the optimal intelligence is an intractable problem;
> and (b) comparing the optimality of one intelligence to
> another is an intractable problem. If both of these prove to
> be true then one intelligence has no grounds to favour itself
> over another (or vice versa) and their morality must be a
> superset of ours. In other words, it's all sunshine and
> lollipops because we've got SIs[*] batting for our team. If
> either one of these proves to be false then the drive toward
> optimality *might* result in us being screwed (where "screwed"
> means our evolved morality is at odds with a general morality
> and we might have to do things we don't "like").
This is a good argument. I've got to jet, but may try to revisit it
tonight.
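Before I do, here's a toy formalization of the case analysis above (a
sketch of my own reading, in Python; the two booleans are hypothetical
stand-ins for Bryan's intractability conditions (a) and (b), not
anything he defined):

    # a: "finding the optimal intelligence is intractable"
    # b: "comparing the optimality of two intelligences is intractable"
    # The quoted claim: if both hold, no intelligence can justify
    # favoring itself, so its morality must subsume ours ("safe");
    # if either fails, the drive toward optimality *might* screw us.
    from itertools import product

    for a, b in product((True, False), repeat=2):
        outcome = "sunshine and lollipops" if (a and b) else "possibly screwed"
        print(f"a={a!s:5} b={b!s:5} -> {outcome}")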
> At the moment I think the "safe" scenario is the most likely.
And again, it's not: "absolute" safety is, quite simply, a physical
impossibility. Attempting to provide it is fun and quixotic, kind of
like trying to paint the moon using those little laser pointers.
:-)
jb