Re: [sl4] Bayesian rationality vs. voluntary mergers

From: Tim Freeman (tim@fungible.com)
Date: Mon Sep 08 2008 - 22:45:46 MDT


From: "Wei Dai" <weidai@weidai.com>
>It's not true that the period of "irrationality" (I put it in quotes because
>it's irrational according to standard decision theory, but not according to
>common sense) has to be short.

You could be right, but I don't think your example supports your conclusion.

>Suppose the merged AI starts trying to convert the universe to
>paperclips based on a coin toss, but after doing 10% of the universe,
>realizes that the staples goal has a much higher chance of success,
>which it didn't know at the beginning. I think the two original AIs
>would have agreed that in this circumstance the merged AI should flip
>another coin to decide whether or not to switch goals.

I think the two original AIs would have agreed that the merged AI
should go for staples without any coin flip, and symmetrically if the
coin flip had dictated a conversion to paperclips that later turned
out to be infeasible.

So after the coin flip, the utility function for the merged AI might
be "51 for a universe of 60% staples, or 49 for a universe of 60%
paperclips, or nothing if I don't get enough staples or enough
paperclips". Swap the numbers if the coin flip comes out the other
way.
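
Spelling that utility function out as a quick sketch might help. This
is just my reading of the example above; the 60% threshold and the
51/49 payoffs are the numbers from that example, and the function and
argument names are made up for illustration:

    def merged_utility(staple_fraction, paperclip_fraction,
                       coin_favored_staples=True):
        # Utility of the merged AI after the coin flip.
        # staple_fraction, paperclip_fraction: fractions of the universe
        # converted to each good, each in [0, 1].
        # coin_favored_staples: which goal the coin flip selected;
        # flipping it swaps the 51/49 payoffs, as described above.
        staple_payoff, paperclip_payoff = (51, 49) if coin_favored_staples else (49, 51)
        if staple_fraction >= 0.60:
            return staple_payoff
        if paperclip_fraction >= 0.60:
            return paperclip_payoff
        return 0  # neither goal met: worthless to both original AIs

So a universe that ends up 60% staples is worth 51 if the coin favored
staples and 49 if it favored paperclips, and a universe with neither
good in sufficient quantity is worth nothing either way.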

I have no strong intuition either way at this point. There might be
some other example that does support your conclusion, but I can't see
one right now, nor can I see a path to proving that irrationality
arising from negotiation must be short-lived.

-- 
Tim Freeman               http://www.fungible.com           tim@fungible.com

