Re: More MWI implications: Altruism and the 'Quantum Insurance Policy'

From: maru (marudubshinki@gmail.com)
Date: Sun Dec 12 2004 - 09:18:00 MST


Here's a flaw, Marc: what you are essentially trying to do is shift the
'average' of the multiverse by making it less likely that
important people get killed repeatedly in closely related
universes, presumably shifting it for the better, right?

Now, I have no physics degree, but it seems to me there are two problems
with your idea. The first is that universes diverge pretty quickly
(chaotically): there are a lot of possibilities to explore, so even a
drastic action would affect only a few. The second is better:
doesn't MWI say that *all* possible universes exist? So how could you
possibly affect the totality? All you could do is maybe improve
your own local universe (the essence of altruism, neh?), in which case
your suggestion is null, since flipping 50-50 between two routes
to work that are equally likely to kill you isn't going to help anything.
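
To put that last point in numbers (a toy calculation in Python, with a
made-up risk figure):

    # If both routes are equally dangerous, randomizing between them
    # leaves the overall (branch-weighted) risk unchanged.
    p = 0.01                         # hypothetical chance either route kills you
    risk_random = 0.5 * p + 0.5 * p  # quantum 50-50 flip between the routes
    assert risk_random == p          # same risk as just picking one route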

~Maru
Marc Geddes wrote:

>O.K, my latest thought may actually qualify as the
>weirdest argument ever to be posted on SL4 ;) I can't
>see any flaw in my argument, but I just thought of it
>so I may be speaking nonsense here. Have a read and
>see what you think. Obviously you need to make the
>starting assumption that MWI of QM is true for the
>argument to make sense.
>
>Thinking about MWI of QM, it occurred to me that a
>true altruist needs to consider the well-being of
>sentients in all the alternative QM branches, not just
>this particular branch.
>
>Now...suppose that something bad were to happen to
>leading FAI researchers like Eli and Ben? Say they
>were both hit by trucks. Then I think it's fair to
>say that the chances of a successful Singularity would
>be somewhat reduced. But what would the situation in
>the multiverse as a whole be if we lost Eli and Ben?
>Well, assuming that the human brain is a classical
>computer and doesn't use quantum indeterminacy (a
>pretty reasonable assumption), then it is likely that
>the deaths of Eli and Ben would already be largely
>determined by classical physics (at least in the QM
>branches that diverged from this time forward). So if
>we lost Eli and Ben in this QM branch, chances are
>they would die in most of the other QM branches of the
>multiverse as well. So if Eli and Ben were hit by
>trucks here, and it was mostly classical physics at
>work leading to their death (a likely assumption, as
>mentioned) then they'd probably be dead in something
>like 99% of all the other QM branches as well. It
>would be bad news for the sentients in most other
>branches of the multiverse.
>
>I realized that there is a way to 're-distribute risk'
>across the multiverse, so as to ensure that a minimum
>fraction of alternative versions of Eli and Ben would
>survive! As I mentioned, a true altruist has to
>consider the well-being of sentients in all the
>alternative branches. It would be bad news for most
>sentients in the multiverse if leading A.I researchers
>were lost. Therefore altruist A.I researchers should
>follow my 'Quantum Insurance Policy' in order to
>safeguard the alternative versions of themselves!
>
>Here's how it works. The reason why the deaths of
>leading A.I researchers in this QM branch would cause
>a problem across the multiverse (from this time
>forward) is the assumption that largely classical
>physics is at work in the human brain. So decisions
>taken by the version of yourself here in this QM
>branch are globally linked to decisions taken by all
>the alternative versions of yourself across the
>multiverse (in the time tracks that diverge from this
>time forward). In short, given the reasonable
>assumption that classical physics is largely at work
>in your brain, if you do something dumb here then most
>of the alternative versions of yourself have done the
>same dumb thing across the multiverse.
>
>Here's how to safeguard some of the alternative
>versions of yourself: simply base some of your
>decisions on quantum random events. There are devices
>that can easily generate quantum random numbers. For
>instance, at the web-site
>http://www.fourmilab.ch/hotbits/ you can get quantum
>random numbers from a lab where radioactive decay is
>used to generate them. Simply link some of your daily
>decisions to these numbers. For instance, at the
>beginning of the day you might draw up a table saying:
>I'll take this route to work if I get a quantum
>heads, and that route to work if I get a quantum
>tails.
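>
>Here's a minimal sketch of the scheme in Python. The exact
>HotBits query parameters below are an assumption (check the
>site's documentation; the interface may require an API key),
>so treat it as illustrative rather than definitive:
>
>    # Fetch one byte of radioactively generated randomness from
>    # HotBits and use its low-order bit as the 'quantum coin'.
>    # NOTE: the URL parameters are an assumption, not confirmed
>    # against the current HotBits interface.
>    import urllib.request
>
>    HOTBITS_URL = "http://www.fourmilab.ch/cgi-bin/Hotbits?nbytes=1&fmt=bin"
>
>    def quantum_coin_flip() -> str:
>        with urllib.request.urlopen(HOTBITS_URL) as response:
>            byte = response.read(1)[0]
>        return "heads" if byte & 1 else "tails"
>
>    # Pre-commit the decision table, then let the flip decide.
>    decision_table = {"heads": "route A", "tails": "route B"}
>    print(decision_table[quantum_coin_flip()])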
>
>See then what happens across the multiverse if leading
>A.I researchers start to use this strategy: the high
>correlations between alternative versions of
>themselves across the multiverse are broken. The
>effect of this is to 're-distribute' risk across the
>multiverse, which actually works to ensure that some
>minimum fraction of your alternative selves is
>shielded from bad things happening. For instance,
>suppose Eliezer was hit by a truck walking to work.
>Suppose he'd been linking the decision about which
>route to walk to work to a 'quantum coin flip'. Then
>half the alternative versions of himself would have
>taken another route to work and avoided the truck. So
>in 50% of QM branches he'd live on. Compare that to
>the case where Eli's decision about which route to
>walk to work was being made mostly according to
>classical physics. If something bad happened to him
>he'd be dead in, say, 99% of QM branches. The effect of
>the quantum decision making is to re-distribute risk
>across the multiverse. Therefore the altruist
>strategy has to be to deploy the 'quantum decisions'
>scheme to break the classical physics symmetry across
>the multiverse.
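>
>To see the claimed effect in numbers, here's a toy Monte
>Carlo in Python. It just illustrates the 99%/50% figures
>above (each branch is one trial, and the truck is assumed
>to be on route A), not any actual physics:
>
>    # Fraction of 'QM branches' in which you survive, with the
>    # hazard on route A. Numbers follow the post's assumptions.
>    import random
>
>    N_BRANCHES = 100_000
>    P_SAME_CHOICE = 0.99  # classical brains: ~99% of branches pick route A
>
>    def survival_fraction(use_quantum_flip: bool) -> float:
>        survivors = 0
>        for _ in range(N_BRANCHES):
>            if use_quantum_flip:
>                # Independent quantum coin in each branch.
>                takes_route_a = random.random() < 0.5
>            else:
>                # Correlated classical decision across branches.
>                takes_route_a = random.random() < P_SAME_CHOICE
>            if not takes_route_a:
>                survivors += 1
>        return survivors / N_BRANCHES
>
>    print(f"classical: {survival_fraction(False):.1%} survive")  # ~1%
>    print(f"quantum:   {survival_fraction(True):.1%} survive")   # ~50%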
>
>In fact the scheme can be used to redistribute the
>risk of Unfriendly A.I across the multiverse. There
>is a certain probability that leading A.I researchers
>will screw up and create Unfriendly A.I. Again, if
>the human brain is largely operating off classical
>physics, a dumb decision by an A.I researcher in this
>QM branch is largely correlated with the same dumb
>decision by alternative versions of that researcher in
>all the QM branches divergent from that time on. As
>an example: Let's say Ben Goertzel screwed up and
>created an Unfriendly A.I because of a dumb decision.
>The same thing happens in most of the alternative
>branches if his decisions were caused by classical
>physics! But suppose Ben had been deploying my
>'quantum insurance scheme', whereby he had been basing
>some of his daily decisions on quantum random
>numbers. Then there would be more variation among the
>alternative versions of Ben across the multiverse. At
>least some versions of Ben would be less likely to
>make that dumb decision, and there would be an assured
>minimum percentage of QM branches avoiding Unfriendly
>A.I.
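>
>The 'assured minimum' generalizes beyond a coin flip: with a
>uniform n-way quantum choice, any single bad option is
>confined to roughly 1/n of branches. A trivial sketch, under
>my assumption of uniform, independent quantum choices:
>
>    # Fraction of QM branches guaranteed to avoid the bad
>    # option(s), given a uniform quantum choice among n options.
>    def assured_safe_fraction(n_options: int, n_bad: int = 1) -> float:
>        return (n_options - n_bad) / n_options
>
>    print(assured_safe_fraction(2))   # coin flip: 0.5 avoid the bad route
>    print(assured_safe_fraction(10))  # 10-way choice: 0.9 avoid it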
>
>
>=====
>"Live Free or Die, Death is not the Worst of Evils."
> - Gen. John Stark
>
>"The Universe...or nothing!"
> -H.G.Wells
>
>
>Please visit my web-sites.
>
>Sci-Fi and Fantasy : http://www.prometheuscrack.com
>Mathematics, Mind and Matter : http://www.riemannai.org


