From: Edwin Evans (firstname.lastname@example.org)
Date: Sun Oct 07 2001 - 17:20:02 MDT
Creating a simulation is one of the most dangerous things an AI could
do. Conceivably, it could simulate pain and suffering (1) on a
hyperastronomical scale. While I find this very unlikely (2), the
sheer magnitude of the disaster dictates that people trying to build
superpowerful AIs should be concerned about this danger. I think the
only thing that can justify taking this risk is the goal of creating
positive benefits of a similar magnitude and creating net positive
objective value (3). If it weren't for the risk of losing the positive
benefit, we should be patient beyond any personal and ordinary human
suffering to reduce the risk of this disaster. Fortunately I think the
odds are heavily in our favor. Unfortunately, we cannot risk waiting too
long to try to push them more in our favor.
(1) By simulated suffering, I only mean that the suffering is occurring
within the computer "box" as opposed to the AI changing the external
world and making outsiders suffer. If it's inside the box, we might be
completely oblivious to it. The other "most dangerous thing" is the AI
causing unimaginable suffering outside the box.
(2) Our creating a simulation that produces enormous suffering is
(logically) more likely than the possibility that we are living (as one
node) in a simulation resulting from a botched Singularity that itself
ends up spawning similar simulations *if* it is true that there is more
suffering than joy in our universe's existence.
(3) Objective Value: Value that is positive because it really is, rather
than because some entity cares about it or thinks it is valuable.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT