Re: Many-worlds (was Re: [sl4] Re: Uploads coming first would be good, right?)

From: Petter Wingren-Rasmussen (petterwr@gmail.com)
Date: Tue Mar 10 2009 - 02:53:50 MDT


Found the passage from Moravec's page I was looking for:

> So is there no difference between being cruel to characters in interactive
> books or video games and people one meets in the street? Books or games act
> on a reader's future only via the mind, and actions within them are mostly
> reversed if the experience is forgotten. Physical actions, by contrast, have
> greater significance because their consequences spread irreversibly. If past
> physical events could be easily altered, as in some time-travel stories, if
> one could go back to prevent evil or unfortunate deeds, real life *would* acquire the moral significance of a video game. A more disturbing
> implication is that any sealed-off activity, whose goings on can be
> forgotten, may be in the video game category. Creators of hyperrealistic
> simulations---or even secure physical enclosures---containing individuals
> writhing in pain are not necessarily more wicked than authors of fiction
> with distressed characters, or myself, composing this sentence vaguely
> alluding to them. The suffering preexists in the underlying Platonic worlds;
> authors merely look on. The significance of running such simulations is
> limited to their effect on viewers, possibly warped by the experience, and
> by the possibility of "escapees"---tortured minds that could, in
> principle, leak out to haunt the world in data networks or physical bodies.
> Potential plagues of angry demons surely count as a moral consequence. In
> this light, mistreating people, intelligent robots, or individuals in
> high-resolution simulations has greater moral significance than doing the
> same at low resolution or in works of fiction not because the suffering
> individuals are more real---they are not---but because the probability of
> undesirable consequences in our own future is greater.
>
On Tue, Mar 10, 2009 at 6:35 AM, Petter Wingren-Rasmussen <petterwr@gmail.com> wrote:

> I find the thoughts of Hans Moravec on this subject very interesting.
> http://www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1998/SimConEx.98.html
>
> In my own words: All experiences are subjective. The difference between
> autobliss and implanting the memory of torture is that the memory will
> affect your mind, and thereby your actions, in our own reality. We can
> therefore evaluate it as negative. Autobliss doesn't, and we can't evaluate it.
>
>
> On Mon, Mar 9, 2009 at 3:16 PM, Matt Mahoney <matmahoney@yahoo.com> wrote:
>
>>
>> --- On Sun, 3/8/09, Vladimir Nesov <robotact@gmail.com> wrote:
>>
>> > What if you create a simulation in which you torture and
>> > murder 10^100 people? Does it become OK if you erase all the evidence?
>>
>> That depends on what your ethical model (1) counts as a simulation, (2)
>> says about simulated murder and torture, and (3) says about undoing your
>> actions.
>>
>> Suppose I claim that running autobliss (
>> http://www.mattmahoney.net/autobliss.txt ) with 2 negative arguments
>> (simulating negative reinforcement regardless of the action of the agent) is
>> 10^-20 as evil as torturing and murdering a human (or pick a number > 0).
>> Then running 10^120 copies would be as evil as torturing and murdering
>> 10^100 people. I can write an equivalent but more efficient program that
>> produces the same output for the same input and run it on my laptop. Instead
>> of reporting that it felt 1000 units of pain and died, it reports 10^123
>> units of pain and 10^120 deaths.
>>
>> Is that unethical? If not, then define which Turing machines count as a
>> simulation of torture and which don't.
>>
>> -- Matt Mahoney, matmahoney@yahoo.com
>>
>>
>
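A concrete aside on Matt's autobliss example above: below is a minimal sketch
of the argument, written by me in Python. It is not the actual autobliss
program (that is at the URL Matt gives); the agent, its reward arguments, and
the aggregate report are stand-ins I made up to illustrate the point. One
function runs a toy agent that is punished whichever action it picks
(cf. "2 negative arguments"); the other is the "equivalent but more efficient
program" that emits the same aggregate report without simulating any agent at
all (10^120 copies x 1000 units each = 10^123 units of pain, 10^120 deaths).

import random

def simulate_one_agent(steps=1000, reward_right=-1, reward_wrong=-1):
    # Both reward arguments are negative, so the agent is punished
    # regardless of which action it chooses.
    pain = 0
    for _ in range(steps):
        action = random.choice([0, 1])            # the choice is irrelevant here
        reward = reward_right if action == 0 else reward_wrong
        pain += -reward                           # accumulate units of pain
    return pain                                   # e.g. 1000 units, then "death"

def report_without_running(copies=10**120, steps=1000):
    # The "equivalent but more efficient program": produce the same
    # aggregate report by arithmetic instead of simulation.
    return copies * steps, copies                 # (total pain, total deaths)

if __name__ == "__main__":
    print(simulate_one_agent(), "units of pain, 1 death")
    total_pain, deaths = report_without_running()
    print("%.3e units of pain, %.3e deaths" % (total_pain, deaths))

Whether collapsing the loop like that changes the moral accounting is, I take
it, exactly the question Matt is asking.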


