From: Peter de Blanc (firstname.lastname@example.org)
Date: Mon May 18 2009 - 17:38:52 MDT
Matt Mahoney wrote:
> I used the AIXI model to make the question precise. I defined an agent that believes it is immortal as one that has the goal of maximizing accumulated reward r(t) for t from now to infinity. If instead an agent believes it will die at time T, then it would rationally have the goal of maximizing accumulated reward summed from now to T. For a known environment, it would behave the same regardless of r(t) for all t > T.
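[The two objectives in the quoted paragraph can be sketched as follows. This is an editorial illustration, not code from the thread; the function names and the constant-reward example are my own.]

```python
# Sketch of the two objectives Matt describes, for a known reward
# sequence r(t). An "immortal" agent maximizes the (divergent) infinite
# sum; a "mortal" agent maximizes the finite sum up to its death time T.

def finite_horizon_value(r, now, T):
    """Accumulated reward from t = now to t = T (inclusive)."""
    return sum(r(t) for t in range(now, T + 1))

# Rewards for t > T never enter the mortal agent's objective, so its
# behavior is unaffected by them -- which is the point of the quote.
r = lambda t: 1.0  # constant reward, purely for illustration
print(finite_horizon_value(r, 0, 9))  # 10.0
```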
Matt, I think the basic problem here is using an unbounded utility
function. With an unbounded utility function, expected utilities can
come out infinite for more than one action, and then there's no way to
choose between those actions. You'd have this problem whether or not
you believe in immortality, as long as your utility function is
unbounded.
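[The divergence Peter is pointing at is the St. Petersburg phenomenon. A minimal editorial sketch, not from the thread: with an unbounded utility function, a gamble can have partial expected-utility sums that grow without bound, so any two such gambles compare as "infinity vs. infinity".]

```python
# St. Petersburg-style gamble: with probability 2**-n you receive
# 2**n utils, for n = 1, 2, 3, ...  Each term of the expected-utility
# sum contributes (2**-n) * (2**n) = 1, so the sum diverges.

def partial_expected_utility(n_terms):
    """Sum of p(n) * u(n) over the first n_terms outcomes."""
    return sum((2.0 ** -n) * (2.0 ** n) for n in range(1, n_terms + 1))

# Partial sums grow linearly without bound:
print(partial_expected_utility(10))   # 10.0
print(partial_expected_utility(100))  # 100.0
```

A bounded utility function (say, u capped at some maximum) makes every such sum converge, which is why the objection targets unboundedness specifically.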
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:04 MDT