From: Matt Mahoney (matmahoney@yahoo.com)
Date: Wed May 20 2009 - 12:02:30 MDT
--- On Wed, 5/20/09, Benja Fallenstein <benja.fallenstein@gmail.com> wrote:
> From: Benja Fallenstein <benja.fallenstein@gmail.com>
> Subject: Re: [sl4] Is belief in immortality computable?
> To: sl4@sl4.org
> Date: Wednesday, May 20, 2009, 1:52 PM
> On Wed, May 20, 2009 at 7:00 PM, Matt Mahoney <matmahoney@yahoo.com> wrote:
> > Do there exist two computable real functions A(t) and B(t) defined over t in R+ such that
> >
> > integral_0^infinity A(t) dt > integral_0^infinity B(t) dt
> >
> > and
> >
> > integral_0^infinity A(t)P(t) dt < integral_0^infinity B(t)P(t) dt
> >
> > for all P != I?
>
> No. If P(t) is a constant c, then integral_0^infinity c F(t) dt = c integral_0^infinity F(t) dt (or am I missing something?). Thus, if the second inequality holds for all P != I, it must also hold for P = I.
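
A quick numerical sketch of the scaling point, using made-up functions (A(t) = 2e^-t, B(t) = e^-t, and the constant survival probability c = 0.5 below are illustrative choices, not anything taken from the thread). Weighting both integrals by a constant P just multiplies each by c, so the ordering of the weighted integrals matches the ordering of the unweighted ones:

    import math

    # Illustrative (made-up) utility rates: A(t) = 2*exp(-t), B(t) = exp(-t),
    # so integral_0^infinity A(t) dt = 2 > integral_0^infinity B(t) dt = 1.
    A = lambda t: 2 * math.exp(-t)
    B = lambda t: math.exp(-t)

    def integrate(f, upper=50.0, steps=200000):
        # Crude left Riemann sum; exp(-t) is negligible beyond t = 50.
        dt = upper / steps
        return sum(f(i * dt) * dt for i in range(steps))

    c = 0.5  # constant survival probability P(t) = c
    print(integrate(A), integrate(B))            # ~2.0 and ~1.0
    print(integrate(lambda t: c * A(t)),
          integrate(lambda t: c * B(t)))         # ~1.0 and ~0.5: same ordering

In particular, once the first inequality holds, the second inequality cannot hold for any constant P, which is the point of the "No" above.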
>
> > In other words, are there A and B such that a rational agent that is certain of its immortality would always choose A, and a rational agent that is uncertain would always choose B?
> >
> > If not, then I claim that rational certainty of immortality is impossible.
>
> As I hinted at in my other mail, I think that the right way to extend decision theory to a potentially immortal agent is to compare the expected utilities of all possible strategies over the whole lifetime of the agent.
> What you are doing is computing expected utilities for the actions taken on day one (= prefixes of whole-lifetime strategies), and you define the expected utility of an action to be the supremum of the expected utilities of all lifetime strategies that start with that action, even when the supremum is not a maximum (i.e., when the set of strategies starting with that action does not have a maximum).
> This doesn't seem like a good definition of "rational decision" to me: if by picking A the agent can get a higher payoff than with any strategy that starts with picking B, then IMO the agent should pick A, and the fact that the "expected utilities" of A and B are equal just means that the proper definition of the expected utility of an action is the maximum, not the supremum, of the EUs of the strategies starting with that action (so B does not *have* an EU).
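
To make the supremum-versus-maximum distinction concrete, here is a toy with invented payoffs (nothing here is taken from the thread): the single strategy starting with A pays 1, while the strategies starting with B pay 1 - 1/n for n = 1, 2, 3, ... The supremum of the B payoffs is 1, so on the supremum definition A and B look equally good, but no B strategy ever attains 1, so on the maximum definition A is strictly better:

    # Hypothetical toy payoffs, for illustration only.
    payoff_A = 1.0                      # the strategy starting with A pays exactly 1

    def payoff_B(n):
        # Strategy n in the B family (e.g. "defer the reward n more days") pays 1 - 1/n.
        return 1.0 - 1.0 / n

    best_B_seen = max(payoff_B(n) for n in range(1, 10**6 + 1))
    print(best_B_seen)   # 0.999999: the B payoffs approach 1 but never reach it
    print(payoff_A)      # 1.0: equals the sup of the B payoffs, yet exceeds every one of them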
>
> However, while I don't think your mathematical argument stands in the way of it, I don't see how we could ever be rationally *completely* sure of anything, so I don't expect to have your (1) and (3). That said, if we could get evidence that makes it exponentially probable that we're immortal, that seems just fine to me, at least on the face of it.
>
> All the best,
> - Benja
>
I think you're right. From your last email, I saw that the utility B could depend on P, which I hadn't considered.
-- Matt Mahoney, matmahoney@yahoo.com