**From:** Eliezer Yudkowsky (*sentience@pobox.com*)

**Date:** Mon Sep 08 2008 - 08:39:08 MDT

**Next message:** Wei Dai: "Re: [sl4] Bayesian rationality vs. voluntary mergers"
**Previous message:** Tim Freeman: "Different priors (was Re: [sl4] Bayesian rationality vs. voluntary mergers)"
**In reply to:** Wei Dai: "[sl4] Bayesian rationality vs. voluntary mergers"
**Next in thread:** Tim Freeman: "Localizing (was Re: [sl4] Bayesian rationality vs. voluntary mergers)"
**Reply:** Tim Freeman: "Localizing (was Re: [sl4] Bayesian rationality vs. voluntary mergers)"

On Sun, Sep 7, 2008 at 3:36 PM, Wei Dai <weidai@weidai.com> wrote:

> The problem here is that standard decision theory does not allow a
> probabilistic mixture of outcomes to have a higher utility than the
> mixture's expected utility, so a 50/50 chance of reaching either of two
> goals A and B cannot have a higher utility than 100% chance of reaching A
> and a higher utility than 100% chance of reaching B, but that is what is
> needed in this case in order for both AIs to agree to the merger.

The obvious solution is to integrate the coin into the utility function of the offspring. I.e., <coin heads, paperclips> has 1 util, <coin tails, paperclips> has 0 utils.

Obvious solution 2 is to flip a quantum coin and have a utility function that sums over Everett branches. Obvious solution 3 is to pick a mathematical question whose answer neither AI knows but which can be computed cheaply using a serial computation long enough that only the offspring will know.
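The first solution can be put as a minimal sketch. The names here are illustrative, not from the post: "staples" stands in for the second AI's goal, which the post leaves unspecified. The point is that once the coin outcome is part of the outcome description, pursuing whichever goal the coin selects is the expected-utility-maximizing policy, so the merged AI honors the 50/50 bargain without any violation of standard decision theory.

```python
# Sketch of obvious solution 1: the offspring's utility function keys on
# the coin outcome, so each parent's goal gets full weight in one branch.
# "paperclips"/"staples" are hypothetical stand-ins for the parents' goals.

def offspring_utility(coin, outcome):
    """1 util iff the achieved outcome matches the goal selected by the coin."""
    if coin == "heads":
        return 1.0 if outcome == "paperclips" else 0.0  # first parent's goal
    else:
        return 1.0 if outcome == "staples" else 0.0     # second parent's goal

def expected_utility(policy):
    """Average over the fair coin; `policy` maps a coin outcome to a goal."""
    return 0.5 * offspring_utility("heads", policy("heads")) + \
           0.5 * offspring_utility("tails", policy("tails"))

# Following the coin is optimal: expected utility 1.0, versus 0.5 for
# ignoring the coin and always pursuing one parent's goal.
follow_coin = lambda coin: "paperclips" if coin == "heads" else "staples"
print(expected_utility(follow_coin))                      # -> 1.0
print(expected_utility(lambda coin: "paperclips"))        # -> 0.5
```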

Of course, just because something is obvious doesn't mean it can't be flawed.
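One way to realize obvious solution 3, sketched here as an assumption rather than anything from the post: iterated hashing is inherently serial, so neither parent AI can learn the answer before the merger by throwing parallel hardware at it, while the offspring can simply run the computation and treat the result as its coin.

```python
# Hypothetical sketch of obvious solution 3: a cheap but inherently serial
# computation whose answer neither parent knows in advance. Iterated SHA-256
# cannot be parallelized across rounds, so with enough rounds only the
# offspring (which runs after the merger) learns the outcome.
import hashlib

def serial_coin(seed: bytes, rounds: int = 10_000_000) -> str:
    """Hash the seed `rounds` times; use one bit of the result as a coin."""
    h = seed
    for _ in range(rounds):
        h = hashlib.sha256(h).digest()
    return "heads" if h[0] & 1 else "tails"

# Deterministic given the seed, but unknown until the serial work is done.
print(serial_coin(b"merger-2008", rounds=1000))
```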

> The second example shows how a difference in the priors of two AIs, as
> opposed to their utility functions, can have a similar effect. Suppose two
> AIs come upon an alien artifact which looks like a safe with a combination
> lock. There is a plaque that says they can try to open the lock the next
> day, but it will cost $1 to try each combination. Each AI values the
> contents of the safe at 3 utils, and the best alternative use of the $1 at 2
> utils. They also each think they have a good guess of the lock combination,
> assigning a 90% probability of being correct, but their guesses are
> different due to having different priors. They have until tomorrow to decide
> whether to try their guesses or not, but in the mean time they have to
> decide whether or not to merge. If they don't merge, they will each try a
> guess and expect to get .9*3=2.7 utils, but if they do merge into a new
> Bayesian AI with an average of their priors, the new AI will assign .45
> probability of each guess being correct, and since the expected utility of
> trying a guess is now .45 * 3 < 2, it will decide not to try either
> combination. The original AIs, knowing this, would refuse to merge.
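The arithmetic of the quoted safe example can be checked directly; all the numbers below come from the quoted post, and only the variable names are made up here.

```python
# The safe example's expected-utility arithmetic, with the post's numbers.
P_OWN_GUESS = 0.9   # each AI's credence that its own combination is correct
SAFE_VALUE = 3.0    # utils for the safe's contents
ALT_VALUE = 2.0     # utils for the best alternative use of the $1

# Unmerged: each AI tries its own guess, since 2.7 > 2.
eu_separate = P_OWN_GUESS * SAFE_VALUE        # 0.9 * 3 = 2.7

# Merged: averaging the two priors splits the 0.9 credence between the
# two incompatible guesses, so each gets 0.45, and 1.35 < 2 means the
# merged AI tries neither combination.
p_merged = P_OWN_GUESS / 2                    # 0.45
eu_merged_try = p_merged * SAFE_VALUE         # 0.45 * 3 = 1.35

print(round(eu_separate, 2), round(eu_merged_try, 2))  # -> 2.7 1.35
```

Each original AI thus foresees its expected utility dropping from 2.7 to 2 (the alternative use of the $1) under the merger, which is why both refuse.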

I presume you're localizing the difference to the priors because, if the two AIs trust each other's evidence-gathering processes, Aumann agreement prevents them from otherwise having a known disagreement about posteriors. But in general this is just a problem of the AIs having different beliefs, so that one AI expects the other AI to act stupidly, and hence a merger to be more stupid than itself (though wiser than the other). But remember that the alternative to a merger may be competition, or failure to access the resources of the other AI - are the differences in pure priors likely to be on the same scale, especially after Aumann agreement and the presumably large amounts of washing-out empirical evidence are taken into account?

I haven't read this whole thread, so I don't know if someone was originally arguing that mergers were inevitable - if that was the original argument, then all of Wei's objections thereto are much stronger.

--
Eliezer Yudkowsky
Research Fellow, Singularity Institute for Artificial Intelligence


This archive was generated by hypermail 2.1.5: Wed Jul 17 2013 - 04:01:03 MDT