Re: [sl4] Is belief in immortality computable?

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Mon May 18 2009 - 13:45:42 MDT


--- On Mon, 5/18/09, Stuart Armstrong <dragondreaming@googlemail.com> wrote:

> > An agent might not know that the universe is finite or that it is
> > mortal. It might still believe itself to be immortal, which we
> > could detect if we observe it making decisions that postpone
> > gratification arbitrarily far into the future. But I don't think
> > such a test exists. (That is my question). I think that for any
> > test, there is a sufficiently large T such that the two rational
> > agents (one believing it will die at time T and the other believing
> > itself to be immortal) will both give the same response.
>
> A continuous sliding scale of investments, returning t^2 at time t.

Let me make sure I understand correctly: you pay me $1 per day for t days. You choose t. After the last payment, I pay you $t^2?

A rational agent expecting to live T days will choose t = T-1. An agent expecting to live forever will pay forever. But in the latter case, you will never know if t is infinite or just really big.
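
To make the arithmetic concrete, here is a rough Python sketch of the first reading. The survival condition (the agent has to be alive on day t+1 to collect) is my assumption about how the offer works, and the names are mine:

    def net_payoff(t, T):
        # Net dollars for an agent that dies after day T and commits to
        # t days of $1 payments, collecting $t^2 the day after the last
        # payment (assumption: it must survive to day t+1 to collect).
        if t + 1 <= T:              # lives long enough to collect
            return t**2 - t
        return -min(t, T)           # dies still paying, never collects

    T = 100                         # believed lifespan in days
    best_t = max(range(T + 1), key=lambda t: net_payoff(t, T))
    print(best_t, net_payoff(best_t, T))   # 99 9702, i.e. t = T-1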

Or do you mean: you choose a number t right now, and then I pay you $t^2 after t days?

This test also fails for agents that communicate using prefix-free strings over finite alphabets, because for any number t, there is a larger number that takes longer to describe. In the first case, the agent will choose t = BusyBeaver(T-1). In the second case, you will wait forever for the agent to finish choosing.
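
A toy illustration of the describability problem, assuming the agent names t in plain decimal (made prefix-free by adding an end marker): the number of symbols needed to name t grows without bound, so there is no largest finite answer the agent can finish giving.

    # Symbols needed to name t in decimal: there is always a larger
    # number that takes longer to describe.
    for t in (9, 99, 10**6, 10**100):
        print(len(str(t)))          # 1, 2, 7, 101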

Agents believing in immortality can be expected to make decisions that seem irrational to us. Suppose you offer an agent a choice of $1 per day or $2 per day forever (adjusted for inflation). To an agent that believes it is immortal, the choices are equivalent because both have the same sum (aleph-null). However, this test fails if that agent happens to choose $2 per day by chance.
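
A rough sketch of why that choice looks irrational to us, assuming the agent simply sums undiscounted reward over its believed horizon:

    # At every finite horizon the $2/day offer strictly dominates, but
    # an agent summing undiscounted reward over an infinite horizon sees
    # two divergent series and is indifferent between them.
    def total(rate_per_day, days):
        return rate_per_day * days

    for days in (10, 1000, 10**6):
        print(days, total(1, days), total(2, days))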

-- Matt Mahoney, matmahoney@yahoo.com


