From: Matt Mahoney (matmahoney@yahoo.com)
Date: Sun May 17 2009 - 14:57:10 MDT
--- On Sun, 5/17/09, Charles Hixson <charleshixsn@earthlink.net> wrote:
> Matt Mahoney wrote:
> > I posted this question to the Singularity list (see http://www.listbox.com/member/archive/11983/2009/05/sort/time_rev/page/1/entry/2:189/20090515134821:90DF5230-4178-11DE-999C-A0AEEBC83AC0/
> > ) but so far nobody knows. Maybe somebody on SL4 knows.
> >
> > My question is whether belief in immortality is
> > computable. The question is important because if not, then
> > we could never arrive at a satisfactory solution to the
> > problem of death. We would forever be trying to solve the
> > problem even after we have solved it.
> >
> > Assume an AIXI model in which the environment is known
> > to the agent or easily learned. If the agent is rational and
> > believes itself immortal, then it would make decisions in
> > favor of maximizing payoffs over an infinite future rather
> > than a finite one. For example, if you could increase your
> > monthly retirement income by delaying your retirement date,
> > then if you were immortal you would never retire. However,
> > an observer watching your behavior could never confirm that decision.
> >
> > To put it another way, belief in mortality is
> > recursively enumerable but not recursive. An observer might
> > be able to examine your source code and get the answer, but
> > generally, the problem seems incomputable because of Rice's
> > theorem. But perhaps I am missing something?
> >
> > -- Matt Mahoney, matmahoney@yahoo.com
> To me the question seems ill-defined. It is involved
> with the definition of the self, and nobody has a good,
> commonly acceptable, and usable definition of the
> self. (Usable implies that it isn't fuzzy enough so
> that any actual disagreements can be hidden by ambiguities.)
I used the AIXI model to make the question precise. I defined an agent that believes it is immortal as one that has the goal of maximizing accumulated reward r(t) summed over t from now to infinity. If instead an agent believes it will die at time T, then it would rationally have the goal of maximizing accumulated reward summed over t from now to T. For a known environment, that agent would behave the same regardless of r(t) for all t > T.
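To make that concrete, here is a toy sketch in Python (my own illustration, with a made-up reward sequence and horizon; it is not the AIXI formalism itself, just the two objective functions):

# A toy illustration: an agent's "plan" is a finite list of rewards r(t)
# it expects to collect at times t = 0, 1, 2, ...  HORIZON_T and the
# plans are hypothetical.

def value_immortal(rewards):
    # goal: maximize accumulated reward over all future times
    return sum(rewards)

def value_mortal(rewards, T):
    # goal: maximize accumulated reward only up to the believed death time T
    return sum(r for t, r in enumerate(rewards) if t <= T)

HORIZON_T = 5

plan_a = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]   # rewards stop after t = 5
plan_b = [1, 1, 1, 1, 1, 1, 9, 9, 9, 9]   # extra reward only after t = 5

# The mortal(T=5) agent is indifferent between the two plans...
print(value_mortal(plan_a, HORIZON_T), value_mortal(plan_b, HORIZON_T))  # 6 6
# ...while the immortal agent prefers plan_b.
print(value_immortal(plan_a), value_immortal(plan_b))                    # 6 42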
The question is whether you can create an environment that can distinguish the two cases. Assume the agent knows the environment you choose and is rational, i.e. it is able to optimally solve the problem of accumulating reward over its (expected finite or infinite) lifetime for your chosen environment. You do not know T.
> E.g.: Is it dying if, over a period of time, you
> lose almost all of your current memories? What if
> this happens just because you get bored with them, so you
> edit them out?
Such an agent would not be rational if it erased its memories after computing that doing so would lead it to a sub-optimal solution to the problem of maximizing accumulated r(t).
> Also, what if the universe is finitely bounded in
> time?
The question is about belief. If the agent believes that the lifespan of the universe is finite, then it would believe itself to be mortal. The question is whether you can test this belief.
> Does not dying (in some sense) for over a
> giga-year or so mean that you're immortal? It would
> seem that the answer was no, but what if you adjusted your
> perceived time rate so that time kept speeding up?
> (I'm sure there's some limit somewhere.)
If you allow r(t) to be continuous, then the infinite and finite cases are equivalent. For example, if t runs from 0 to infinity, then the substitution u = t/(1+t) maps the infinite interval onto [0, 1), so you can define a compressed reward r'(u) = r(u/(1-u)) on [0, 1). If an agent can optimally solve one problem then it can solve the other. The question is then whether you can distinguish an agent that optimizes over [0, 1) from an agent that optimizes over [0, T/(T+1)), where T could be arbitrarily large.
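As a numerical sanity check, here is a small sketch (an arbitrary choice of r(u) = exp(-u); I include the Jacobian of the substitution so that total accumulated reward is literally preserved, which is one way to make the equivalence concrete):

# The substitution u = t/(1-t) maps compressed time [0, 1) onto [0, infinity).
# With the Jacobian du/dt = 1/(1-t)^2, accumulated reward is identical, so
# optimizing over [0, T] is the same problem as optimizing over [0, T/(T+1)).
import numpy as np

def r(u):
    return np.exp(-u)                      # hypothetical reward density

def r_compressed(t):
    u = t / (1.0 - t)                      # original time as a function of compressed time
    return r(u) / (1.0 - t) ** 2           # Jacobian keeps total reward equal

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

T = 50.0                                   # an arbitrary large but finite horizon
u = np.linspace(0.0, T, 200_001)
t = np.linspace(0.0, T / (T + 1.0), 200_001)

print(trapezoid(r(u), u))                  # ~ 1 - exp(-50), i.e. ~ 1.0
print(trapezoid(r_compressed(t), t))       # same total reward on [0, T/(T+1))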
> Or what about merging minds with others? If you can't
> separate out again, have you died?
Under AIXI, this model is sub-optimal because an agent does not know about or control the other minds prior to the merge. The question is about testing an optimally rational agent.
> And some people believe that if you exist in multiple
> instances, and any one of this dies, then you have
> died. Some people believe that uploading prevents (or
> ensures) early death.
If an agent believes that these things cause death and believes that they will happen, then it believes in its mortality. The question is whether such beliefs can be detected.
> Without a good definition of self, these questions are
> unanswerable. Immortality isn't the sticking
> point. (A literal definition would say that
> immortality is impossible in a universe finitely bounded in
> time. If that suffices, then you don't need a
> definition of self. A simple "You can't do it" would
> suffice. So I've been assuming you meant something
> else.)
An agent might not know that the universe is finite or that it is mortal. It might still believe itself to be immortal, which we could detect if we observed it making decisions that postpone gratification arbitrarily far into the future. But I don't think such a test exists. (That is my question.) I think that for any test, there is a sufficiently large T such that the two rational agents (one believing it will die at time T and the other believing itself to be immortal) will give the same response.
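Here is a toy version of that delayed-gratification test (my own sketch; the payoff schedule, the observation window, and the assumption that the mortal agent cashes in exactly at its believed horizon are all hypothetical simplifications):

# Each step the agent may WAIT (nothing now, larger payoff later) or
# CASH_IN (collect payoff(t) and stop).  Since payoff is strictly
# increasing, a rational agent that believes it dies at time T cashes in
# at t = T, and an agent that believes it is immortal never cashes in.

def payoff(t):
    return t + 1                     # hypothetical strictly increasing payoff

def action(t, T=None):
    """Optimal action at step t for believed horizon T (None = immortal)."""
    if T is None:
        return "WAIT"                # over an infinite future, waiting always wins
    return "CASH_IN" if t == T else "WAIT"

def observe(T, window):
    """What an observer sees during the first `window` steps."""
    return [action(t, T) for t in range(window)]

N = 1000                             # any finite observation window
T = N + 1                            # a mortal agent that merely outlives the test
print(observe(None, N) == observe(T, N))   # True: the test cannot tell them apart

For any finite window the observer chooses, a mortal agent with a large enough T produces exactly the same observations as the immortal one.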
The question is important to us because, if no such test exists, then regardless of any life-prolonging mind or body enhancement we will always have some doubt about our immortality even if we achieve it.
-- Matt Mahoney, matmahoney@yahoo.com