Re: [sl4] Is friendliness computable? (was Re: Why extrapolate?)

From: Gwern Branwen (gwern0@gmail.com)
Date: Tue Oct 27 2009 - 10:58:21 MDT


On Mon, Oct 26, 2009 at 9:53 PM, Matt Mahoney <matmahoney@yahoo.com> wrote:
> Gwern Branwen wrote:
>> I noted this line with interest, given my recent* argument that
>> assigning nonzero probabilities to each member of an infinite set
>> leads to nonsensical conclusions.
>
> Why should it? A data compressor assigns a nonzero probability to all possible input strings such that they add to 1.
>  -- Matt Mahoney, matmahoney@yahoo.com

Sorry, I should be clearer (I think I was in the comment): it's assigning
nonzero probabilities with a minimum / lower bound that leads to the
nonsensical conclusion.

If I understand the proofs correctly, we get a sane AI if we don't
mind assigning, say, a probability that falls off as the reciprocal of
the postulated utility - in which case every deal offered by the mugger
has a nonzero probability, but there is no 'leveling off' in
probability as the promised utility rises. It's only if we don't shrink
the probability by as much as the reciprocal (or more) that we get the
divergence issue.
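
To make the distinction concrete, here's a toy Python sketch (my own
illustration, not something from the proofs; the 1e-4 floor and the
powers-of-ten utilities are arbitrary choices): with reciprocal
shrinkage each offer contributes only a bounded amount to the expected
utility, while a fixed lower bound lets the mugger pump the expectation
arbitrarily high just by promising more.

    # Contribution of each offer to expected utility: p(U) * U
    def expected_payoff(prob, utilities):
        return [prob(u) * u for u in utilities]

    utilities = [10 ** k for k in range(1, 7)]  # ever-larger promised utilities

    # Case 1: probability shrinks as the reciprocal of the promised utility.
    # Each offer's expected payoff stays bounded (here, constant at 1).
    reciprocal = lambda u: 1.0 / u
    print(expected_payoff(reciprocal, utilities))  # [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]

    # Case 2: probability has a fixed lower bound (a 'leveling off' at 1e-4).
    # Expected payoff grows without limit as the mugger inflates his promise.
    floored = lambda u: max(1.0 / u, 1e-4)
    print(expected_payoff(floored, utilities))  # [1.0, 1.0, 1.0, 1.0, 10.0, 100.0]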

(This doesn't seem too bad a solution to me, but the open question is
how exactly the probabilities should shrink. If anyone knows of any
work showing that the reciprocal forces us to bite bullet X, or that
discounting by more than the reciprocal, or by anything other than
exactly that much, forces bullet Y, I'd appreciate learning of it!)

-- 
gwern
