Re: Many-worlds (was Re: [sl4] Re: Uploads coming first would be good, right?)

From: Petter Wingren-Rasmussen (petterwr@gmail.com)
Date: Mon Mar 09 2009 - 23:35:59 MDT


I find the thoughts of Hans Moravec on this subject very interesting.
http://www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1998/SimConEx.98.html

In my own words: All experiences are subjective. The difference between
autobliss and implanting the memory of torture is that the implanted memory
will affect your other memories, and thereby your actions, in our own reality.
We can therefore evaluate it as negative. Autobliss doesn't, and so we can't
evaluate it.
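
To make the distinction concrete, here is a rough sketch in Python (my own toy
illustration, not the actual code behind autobliss.txt; the names and numbers
are made up):

# Toy illustration only -- not the real autobliss program.
# The "bliss/pain" value below never leaves the function, so nothing
# else in the world can ever depend on it.
def autobliss_like(steps=1000, reward=-1.0):
    total = 0.0
    for _ in range(steps):
        total += reward       # felt only inside this loop
    return total              # discarded by the caller; no further effect

# An implanted memory, by contrast, persists and is read back later,
# so the event has consequences in our reality that we can evaluate.
memories = []

def implant_memory(event):
    memories.append(event)

def choose_action():
    return "avoid" if "torture" in memories else "approach"

The first function can report any amount of "pain" without anything outside it
changing; the second changes what the agent later does, which is the part we
can actually judge.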

On Mon, Mar 9, 2009 at 3:16 PM, Matt Mahoney <matmahoney@yahoo.com> wrote:

>
> --- On Sun, 3/8/09, Vladimir Nesov <robotact@gmail.com> wrote:
>
> > What if you create a simulation in which you torture and
> > murder 10^100 people? Does it become OK if you erase all the evidence?
>
> That depends on what your ethical model (1) counts as a simulation and (2)
> says about simulated murder and torture and (3) says about undoing your
> actions.
>
> Suppose I claim that running autobliss (
> http://www.mattmahoney.net/autobliss.txt ) with 2 negative arguments
> (simulating negative reinforcement regardless of the action of the agent) is
> 10^-20 as evil as torturing and murdering a human (or pick a number > 0).
> Then running 10^120 copies would be as evil as torturing and murdering
> 10^100 people. I can write an equivalent but more efficient program that
> produces the same output for the same input and run it on my laptop. Instead
> of reporting that it felt 1000 units of pain and died, it reports 10^123
> units of pain and 10^120 deaths.
>
> Is that unethical? If not, then define which Turing machines count as a
> simulation of torture and which don't.
>
> -- Matt Mahoney, matmahoney@yahoo.com
>
>
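
Incidentally, the "equivalent but more efficient program" step in the quoted
argument is easy to picture. A minimal sketch (mine, using the figures from
the mail above; it is not the real autobliss.txt):

# Instead of running 10^120 copies of a simulation that each report
# 1000 units of pain and one death, print the same aggregate report
# directly. Same output for the same input, negligible cost.
COPIES = 10**120
PAIN_PER_COPY = 1000   # "1000 units of pain and died" per run, per the mail

def run_equivalent_program():
    # prints the integers 10^123 (units of pain) and 10^120 (deaths) in full
    print(COPIES * PAIN_PER_COPY, "units of pain,", COPIES, "deaths")

run_equivalent_program()

Whether those two programs differ morally is exactly the question the quoted
mail ends on.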


