From: Durant Schoon (durant@ilm.com)
Date: Wed Mar 21 2001 - 20:40:30 MST
> From: Declan McCullagh <lists@politechbot.com>
>
> On Sat, Mar 17, 2001 at 07:18:22PM -0800, Durant Schoon wrote:
> > This is my simulated world, however, the following are not allowed
> > because these simulations are sufficiently complex and these sentients
> > have rights of their own:
> >
> > 1) I cannot destroy either ABoy or HisDog (nor torture them, maim
> > them, etc.)
>
> This all seems like rather naive hand-waving. Pardon me for being
> blunt, and I understand that others have read and written far more on
> the topic than I have, but the author of this post should probably
> address two questions:
>
> * What level of intelligence rises to the level of sentience respected
> by the sysop/government as a person or equivalent? Animals we need not
> concern ourselves with -- look at what we currently do with lab
> animals.
As the hand-waving author of the original post, I feel obligated to
clarify:
The Sysop shall prevent all suffering. Animals shall not suffer, ever.
Plants shall not suffer, ever. Virii shall not suffer, ever. Well, ok,
maybe not. But whatever the level is, let the Sysop determine what
constitutes sentience and disallow (what constitutes) suffering.
If you try to set your sister's cat on fire: (API) Error. The nanostuff
of which everything is made will not allow you to do it. (Hey, I wonder
if I can get a stack trace with that?) This sort of scenario was
described in previous posts. I wrongly assumed everyone would have read
them. Maybe my original "Sentients As Temporary Variables" post will
make more sense to you now (Dammit, we need Xanadu Threaded
Discussions!... or I need web access so I can cross-reference the right
archived post).
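To make the metaphor a bit more concrete, here is a tiny sketch of what
that refusal might look like. Everything in it is invented for
illustration (the Sysop exposes no API that anyone has specified); it
just shows the substrate checking an operation against a harm rule and
refusing it, stack trace included:

    # Purely hypothetical sketch -- the Sysop, this API, and every name
    # below are invented for illustration only.
    import traceback
    from dataclasses import dataclass

    @dataclass
    class Entity:
        name: str
        sentient: bool

    HARMFUL_ACTIONS = {"ignite", "torture", "erase"}

    class VolitionError(Exception):
        """Raised when an operation would harm a sentient against its will."""

    class Sysop:
        def apply(self, actor: str, action: str, target: Entity) -> str:
            # Every low-level operation is checked against Friendliness
            # before the nanostuff lets it take effect.
            if target.sentient and action in HARMFUL_ACTIONS:
                raise VolitionError(f"{actor} may not {action} {target.name}")
            return f"{action} applied to {target.name}"

    sysop = Sysop()
    cat = Entity("your sister's cat", sentient=True)
    try:
        sysop.apply("you", "ignite", cat)
    except VolitionError:
        traceback.print_exc()   # and there's your stack trace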
But even without defining what level of complexity implies sentience,
we still have the problem: when a simulation complex enough to be
deemed sentient is created, the Sysop shall prevent that simulation
from ever encountering harm or wrongful termination (according to
Friendliness).
If you had created me as a simulation, you could not (as in "would not
be able to") hurt me because the Sysop will prevent it. So if you keep
creating sentient beings which you cannot erase, at some point you
might run into resource problems, yes? Have I made the problem clear?
There are obvious solutions, e.g. each new sentience must have a
guaranteed amount of computational resources with which to pursue
life, liberty and happiness, blah, blah, blah.
(Imagine what will happen when the Mormons upload...and if they each
upload with 2 uplifted rabbits which are each residing in yards with 2
tribbles... :) (please note: I have nothing against Mormons, rabbits,
yards or tribbles))
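A back-of-the-envelope sketch of that resource worry, with entirely
made-up numbers (total budget, per-sentient guarantee, and growth rate
are all invented; only the "allocations can never be reclaimed" rule
comes from the argument above):

    import math

    TOTAL_OPS = 10**42   # pretend total computational budget (invented)
    GUARANTEE = 10**20   # irrevocable ops guaranteed per sentient (invented)
    SEED      = 10**6    # initial uploads (invented)
    GROWTH    = 3        # each sentient brings two companions per generation

    # Reserved resources after g generations: SEED * GROWTH**g * GUARANTEE.
    # The pool is fully committed once that exceeds TOTAL_OPS:
    g = math.log(TOTAL_OPS / (SEED * GUARANTEE), GROWTH)
    print(f"The whole budget is spoken for after about {g:.0f} generations.")

Whatever numbers you pick, exponential creation of unerasable sentients
with guaranteed allocations eats any finite pool after only
logarithmically many generations.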
Now what does this "inability to erase or injure" imply?
If Friendliness protects all sentients, simulated or otherwise, then
the world as we live in it now cannot be a simulation run under the
rules of Friendliness...or that is one conclusion, anyway. I thought I
could bring together the idea of being in a simulation with our
notion of Friendliness. When I did, I found a big conflict which
surprised me.
There is the possibility that Friendliness lapses long enough for
someone to run the entire universe sim again (the sim can run faster
than "real" time...eg. perhaps it can be sent as a signal at close
the speed of light).
But, ack, no one really wanted to discuss the point (or I mangled it
too severely). Philosophizing can become quite laborious...
-- Durant x2789