From: Marc Geddes (marc_geddes@yahoo.co.nz)
Date: Sun Dec 12 2004 - 22:25:22 MST
--- Edmund Schaefer <edmund.schaefer@gmail.com> wrote:
>
> > For instance suppose Eliezer was hit by a truck walking to work.
> > Suppose he'd been linking the decision about which route to walk
> > to work to a 'quantum coin flip'. Then half the alternative
> > versions of himself would have taken another route to work and
> > avoided the truck. So in 50% of QM branches he'd live on. Compare
> > that to the case where Eli's decision about which route to walk
> > to work was being made mostly according to classical physics. If
> > something bad happened to him he'd be dead in say 99% of QM
> > branches. The effect of the quantum decision making is to
> > re-distribute risk across the multiverse. Therefore the altruist
> > strategy has to be to deploy the 'quantum decisions' scheme to
> > break the classical physics symmetry across the multiverse.
>
> This only works because our fictional Eli assigned a 99%
> probability to the lethal path being more desirable. Your
> "insurance policy" boils down to the following piece of advice: If
> you make a decision that you're really sure about, and happen to be
> wrong, you're better off flipping a coin. Sure, that's sound
> advice, but it doesn't do me any good. If I knew I was wrong, it
> wouldn't be very sane of me to keep that 99% estimate of
> desirability. You started with a *really* bad decision, scrapped
> the decision in favor of a fifty-fifty method, saw that it
> drastically improved the survival of your quantum descendants, and
> said "behold the life-saving power of randomness". Sorry, but it
> doesn't work like that. Intelligence works better than flipping
> coins. If you trust coin flips instead of intelligence, you're more
> likely to get killed. Applying this to MWI translates it to "If you
> go with intelligence, you survive in a greater number of quantum
> realities."
Hang on there. The fictional Eli in my example didn't *assume*
anything. The example amounted to Eli asking: *What if* I'm living
in a branch with a 99% chance of my death soon? I don't know for
certain that the decision I'm taking is really, really bad. I'm
asking *what if* the decision is that bad.
Then the reasoning goes: Is there something that could be done to
'spread' the risk among all the sentient copies of myself that
diverge across the multiverse from now on?
The quantum coin flip can then be used to 'insure' some of the
alternative copies of Eli against that bad decision. The quantum
coin does not have to be 50-50. It can be 'weighted' by factoring in
all rational information.
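To put rough numbers on it, here is a minimal sketch in Python of the
branch-counting arithmetic. The 0.99 figure and the two-route setup
are just the assumptions from the truck example above, and a real
implementation would need a genuine quantum random source:

    # Illustrative only: route A is fatal with probability 0.99 in
    # this family of branches; route B is safe.
    p_fatal_A = 0.99
    p_fatal_B = 0.0

    # Classical decision: (nearly) every branch-copy of Eli picks
    # route A, so the risk is correlated across branches.
    classical_survival = 1 - p_fatal_A          # ~0.01

    # Weighted quantum coin: a fraction w of branches takes route A,
    # the remaining (1 - w) take route B, by construction.
    w = 0.5
    quantum_survival = w * (1 - p_fatal_A) + (1 - w) * (1 - p_fatal_B)

    print(classical_survival, quantum_survival)  # ~0.01 vs. ~0.505

Whatever the true danger of route A turns out to be, the weight w
fixes a guaranteed floor of (1 - w) branches that never take it.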
>
> > In fact the scheme can be used to redistribute the risk of
> > Unfriendly A.I across the multiverse. There is a certain
> > probability that leading A.I researchers will screw up and create
> > Unfriendly A.I. Again, if the human brain is largely operating
> > off classical physics, a dumb decision by an A.I researcher in
> > this QM branch is largely correlated with the same dumb decision
> > by alternative versions of that researcher in all the QM branches
> > divergent from that time on. As an example: Let's say Ben
> > Goertzel screwed up and created an Unfriendly A.I because of a
> > dumb decision. The same thing happens in most of the alternative
> > branches if his decisions were caused by classical physics! But
> > suppose Ben had been deploying my 'quantum insurance scheme',
> > whereby he had been basing some of his daily decisions off
> > quantum random numbers. Then there would be more variation in the
> > alternative versions of Ben across the Multiverse. At least some
> > versions of Ben would be less likely to make that dumb decision,
> > and there would be an assured minimum percentage of QM branches
> > avoiding Unfriendly A.I.
>
> And if he doesn't screw up the AI? What if Ben was right? Your
> insurance scheme just killed half of that branch of the multiverse
> because a lot of Bens decided to go on coin flips instead of a
> correct theory, and I don't see why the second batch of
> fifty-gazillion sentients is less valuable than the first batch.
> Also, keep in mind, there's going to be some that hit the
> ultimately desirable state. Somewhere out there there's a quantum
> reality where Friendly AI spontaneously materialized out of a gas
> cloud. You can't really drive the number of desirable quantum
> realities down to zero, any more than you can accurately assign
> something a Bayesian probability of zero.
>
The quantum coin flip does not have to be 50-50. It can be
'weighted' to factor in all rational data as per Bayes' theorem.
Suppose Ben had performed all the rational analysis he could. He
would still be left with some probability distribution over certain
courses of action. He could then 'bias' the quantum coin according
to this probability distribution.
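As a sketch of what 'biasing the coin' could look like (the
posterior weights are invented, and Python's random.random() merely
stands in for the genuine quantum random source the scheme requires):

    import random

    # Ben's posterior over design choices after all rational
    # analysis (weights invented for illustration).
    posterior = {
        "design_A": 0.90,   # best guess
        "design_B": 0.09,
        "design_C": 0.01,
    }

    def weighted_quantum_choice(dist):
        """Pick one option with probability equal to its weight.
        Under MWI with a quantum source, each option is then
        realized in a fraction of branches equal to its weight."""
        r = random.random()   # stand-in for a quantum random draw
        cumulative = 0.0
        for option, weight in dist.items():
            cumulative += weight
            if r < cumulative:
                return option
        return option         # guard against round-off at r ~ 1.0

    print(weighted_quantum_choice(posterior))

Even if design_A turns out to be the fatal mistake, one branch-Ben
in ten still ends up pursuing design_B or design_C.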
In the fictional example Ben is *not* assuming that the decision he
is about to take is bad. He is saying it *might* be bad. He is then
asking if there is anything he can do to 'spread the risk' across
alternative versions of himself in the multiverse, so as to ensure
that some minimum fraction of his alternative selves experience a
good outcome.
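With invented numbers: if Ben weights the coin 90-10 in favour of
his best design, then even in the worst case, where the favoured
design is the one that produces Unfriendly A.I, 10% of the
branch-versions of Ben take the other path, so at least 10% of the
divergent QM branches are guaranteed to avoid that particular
mistake.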
=====
"Live Free or Die, Death is not the Worst of Evils."
- Gen. John Stark
"The Universe...or nothing!"
-H.G.Wells
Please visit my web-sites.
Sci-Fi and Fantasy : http://www.prometheuscrack.com
Mathematics, Mind and Matter : http://www.riemannai.org