From: PETER JENKINS (peterjenkins@rogers.com)
Date: Sun Oct 21 2007 - 20:11:16 MDT
Rolf asks:
> Did you mean to say only that self-motivated long-range planning would be futile? I would argue that altruistic long-range planning is not necessarily affected by the analysis. Example:
> (Making up figures here; the actual figures don't matter much to the analysis.) Suppose that every non-simulated civilization produces, on average, self-aware simulations equivalent to 10^100 civilizations, 0.001% of which are flawless ancestor simulations. A basic vanilla Simulation Argument would say that the odds are 10^95 to 1 that you live in a simulation. However, any decisions you make will affect 10^100 simulated civilizations. So in basic vanilla utilitarianism, your actions are still dominated by a factor of 100,000 to 1 by considerations of "what should I do if we do *not* live in an ancestor simulation?"
You are using an expected utility analysis: if there is a 1 in 10^95 chance that long-range planning will benefit 10^100 simulated civilizations, then you should undertake such planning, since the remoteness of there being any effect at all is outweighed by the enormous size of the effect if it occurs. Although this is an intriguing form of analysis in theory, I think it unlikely that the flawed sims (where presumably it is obvious to the AI inhabitants that they are in a sim) would outnumber the flawless sims by such a huge margin. Alternatively, if flawed sims were created in such great numbers, they would quickly be terminated as inflicting unnecessary suffering on their inhabitants while providing no valuable experimental data, so they would not factor much into the equation. I am currently working on a paper on constraining rogue AI through the use of the simulation argument, so thanks for raising this point.
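To make the arithmetic explicit, here is a rough sketch of the expected-utility comparison using your made-up figures; the two-branch utility model and the variable names are only my illustration of the reasoning, not anything from my paper:

    # Rolf's made-up figures
    total_sims = 10**100                 # self-aware sims produced per non-simulated civilization
    flawless_sims = total_sims // 10**5  # 0.001% are flawless ancestor simulations -> 10**95

    # You cannot tell a flawless ancestor sim from reality, so you are either
    # the one real civilization or one of its flawless sims.
    p_real = 1 / (flawless_sims + 1)     # roughly 10**-95

    # Illustrative two-branch utility model: if real, your decisions touch every
    # simulation you will eventually run; if simulated, only your own civilization.
    impact_if_real = total_sims
    impact_if_simulated = 1

    weight_real = p_real * impact_if_real                  # roughly 10**5
    weight_simulated = (1 - p_real) * impact_if_simulated  # roughly 1

    print(weight_real / weight_simulated)  # ~100,000: the "not a simulation" branch dominates

On these figures the "we are not simulated" branch dominates by about 10^5 to 1, which is the point of the example: the exact numbers do not matter, only that the impact of the real branch can swamp its improbability.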
----- Original Message -----
From: Rolf Nelson
To: sl4@sl4.org
Sent: Sunday, October 21, 2007 1:13 PM
Subject: Re: Ethical experimentation on AIs
Thanks for the link to your paper, Peter. Your paper states that we are likely to be living in a simulation, and therefore...
> Long range planning beyond... [2050] would therefore be futile.
Did you mean to say only that self-motivated long-range planning would be futile? I would argue that altruistic long-range planning is not necessarily affected by the analysis. Example:
(Making up figures here; the actual figures don't matter much to the analysis.) Suppose that every non-simulated civilization produces, on average, self-aware simulations equivalent to 10^100 civilizations, 0.001% of which are flawless ancestor simulations. A basic vanilla Simulation Argument would say that the odds are 10^95 to 1 that you live in a simulation. However, any decisions you make will affect 10^100 simulated civilizations. So in basic vanilla utilitarianism, your actions are still dominated by a factor of 100,000 to 1 by considerations of "what should I do if we do *not* live in an ancestor simulation?"
Of course there are arguments against basic vanilla utilitarianism when large numbers are concerned. But I would argue that for every valid argument that "basic utilitarianism shouldn't apply to large numbers because it produces silly results", there's an equally valid argument that "the basic Simulation Argument shouldn't apply to large numbers because it produces silly results."
(Caveat: I don't believe in the Simulation Argument. For example, if it comes down to a choice between something like Wei Dai's UDASSA or believing I live in a simulation, I consider UDASSA more likely. That said, I consider the Simulation Argument a deep and noteworthy argument, and UDASSA is the only model I know of that might allow an escape that I personally find "satisfactory". I'm still holding out hope, though, that post-Singularity, when we have centuries to think about it at leisure, we'll be able to come up with a better model.)
On 10/20/07, Peter S Jenkins <peterjenkins@rogers.com> wrote:
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=929327
Here is a link to my paper on this issue, which was mentioned in the NY Times last August -- comments welcome.