**From:** Tim Freeman (*tim@fungible.com*)

**Date:** Mon Sep 08 2008 - 07:25:44 MDT

**Next message:** Eliezer Yudkowsky: "Re: [sl4] Bayesian rationality vs. voluntary mergers"
**Previous message:** Tim Freeman: "Re: [sl4] Bayesian rationality vs. voluntary mergers"
**In reply to:** Wei Dai: "[sl4] Bayesian rationality vs. voluntary mergers"
**Next in thread:** Eliezer Yudkowsky: "Re: [sl4] Bayesian rationality vs. voluntary mergers"

From: "Wei Dai" <weidai@weidai.com>

Date: Sun, 7 Sep 2008 15:36:48 -0700

> They also each think they have a good guess of the lock combination,
> assigning a 90% probability of being correct, but their guesses are
> different due to having different priors. They have until tomorrow to decide
> whether to try their guesses or not, but in the mean time they have to
> decide whether or not to merge. If they don't merge, they will each try a
> guess and expect to get .9*3=2.7 utils, but if they do merge into a new
> Bayesian AI with an average of their priors, the new AI will assign .45
> probability of each guess being correct, and since the expected utility of
> trying a guess is now .45 * 3 < 2, it will decide not to try either
> combination. The original AIs, knowing this, would refuse to merge.

If we assume that

* both AIs are using "the" Universal Prior or something analogous to assign prior probabilities, and
* both AIs have made the same observations,

then they can *still* disagree about the prior probabilities.

The Universal Prior depends on the sizes of the various Turing machines that compute the observed past and make a prediction about the future. There isn't a unique way to choose the encoding of the Turing machines, so the two AIs might have chosen different implementation languages for their Turing machines and therefore get different priors based on the same observations.

(Does the encoding of the Turing machines make a difference in the limit as the set of past observations gets larger? I don't know. I hope not.)
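As a toy illustration of that encoding dependence (the description lengths below are invented for illustration, not derived from any real machine), a 2**-length prior can rank the same two hypotheses oppositely under two encodings:

```python
# Hypothetical description lengths (in bits) for two hypotheses under two
# different reference encodings; the numbers are made up for illustration.
lengths_encoding_A = {"h1": 5, "h2": 7}
lengths_encoding_B = {"h1": 9, "h2": 6}

def prior(lengths):
    # 2**-length weights, normalized over this toy hypothesis set.
    weights = {h: 2.0 ** -k for h, k in lengths.items()}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

print(prior(lengths_encoding_A))  # h1 gets 0.8 under encoding A
print(prior(lengths_encoding_B))  # h2 gets 8/9 under encoding B
```

The invariance theorem only bounds the disagreement between two such priors by a machine-dependent multiplicative constant, so finite disagreements like this one are permitted.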

If there is a conflict with large-enough cost when both AIs want to attempt to open the safe, then it might be rational for them to merge into something that does a coin flip and, based on that, uses the priors of one of the source AIs rather than an average.
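A minimal sketch of why that helps, reusing the quoted numbers: after the coin flip the merged AI holds one source AI's priors wholesale, so it still assigns 0.9 (not the averaged 0.45) to that AI's guess and still finds trying worthwhile.

```python
SAFE_VALUE = 3.0       # utils for opening the safe (from the quoted scenario)
OUTSIDE_OPTION = 2.0   # utils for abstaining, inferred from ".45 * 3 < 2"

# Averaged-prior merge: 0.45 on each guess, so the merged AI abstains.
eu_averaged = 0.45 * SAFE_VALUE   # 1.35 < 2

# Coin-flip merge: whichever prior wins the flip assigns 0.9 to its own
# guess, so the merged AI tries that guess.
eu_coinflip = 0.9 * SAFE_VALUE    # 2.7 > 2

print(eu_averaged, eu_coinflip)
```

Each source AI still values the other's guess at far less than 2.7 under its own prior, so ex ante the coin-flip merge only looks attractive when the cost of unmerged conflict is large, as the paragraph above says.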

-- Tim Freeman http://www.fungible.com tim@fungible.com


*This archive was generated by hypermail 2.1.5: Wed Jul 17 2013 - 04:01:03 MDT*