From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Oct 07 2001 - 18:10:34 MDT
Edwin Evans wrote:
>
> Creating a simulation is one of the most dangerous things an AI could
> do. Conceivably, it could simulate pain and suffering (1) on a
> hyperastronomical scale. While I find this very unlikely (2), the
> sheer magnitude of the disaster dictates that people trying to build
> superpowerful AIs should be concerned about this danger. I think the
> only thing that can justify taking this risk is the goal of creating
> positive benefits of a similar magnitude and creating net positive
> objective value (3).
I agree that this is the best reason for creating SI. However, it is not
the only conceivable reason. For example, it might be the case that it is
not a question of *whether* to take the risk, but of when and by whom; in
this scenario it may be logical, although perhaps not exactly the smartest
thing to do, to trade off a 70% risk against a 90% risk. Of course in this
case another logical course of action is to try to wipe humanity off the
planet, counting the whole thing as a bad deal and leaving the creation of
Singularities to some other, more organized species. This, too, is perhaps
not the smartest thing to do. Call me a starry-eyed
impractical idealist, but I tend to think that intelligence is the tool
that lets us avoid these dilemmas in the first place...
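To make the forced-choice arithmetic concrete, here is a minimal sketch in
Python, using the 70% and 90% figures above as purely illustrative numbers
and utilities that are arbitrary placeholders of my own:

    # Forced-choice timing tradeoff.  Risk figures are the illustrative
    # 70% / 90% from above; the utility values are arbitrary placeholders.
    def expected_value(p_disaster, value_success=1.0, value_disaster=-1.0):
        """Expected outcome of an attempt with the given failure risk."""
        return (1 - p_disaster) * value_success + p_disaster * value_disaster

    act_now  = expected_value(p_disaster=0.70)   # our attempt, sooner
    leave_it = expected_value(p_disaster=0.90)   # someone else's, later
    print(act_now, leave_it)   # -0.4 versus -0.8: both bad, 70% less so

The point is only that once the question becomes "when and by whom" rather
than "whether", the lower of two bad risks can dominate, even though
neither branch looks remotely attractive on its own.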
> If it weren't for the risk of losing the positive
> benefit, we should be patient beyond any personal and ordinary human
> suffering to reduce the risk of this disaster. Fortunately I think the
> odds are heavily in our favor. Unfortunately, we cannot risk waiting too
> long to try to push them more in our favor.
Maybe the best summary is that the fundamental odds determine whether or
not it's a good idea to have a Singularity some time in the next fifty
thousand years, while it's the timing issue relative to other technologies
and other projects - and, of course, the ongoing planetwide death rate -
which makes it a good idea to do it some time in, say, the next decade.
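A back-of-the-envelope version of that timing argument, with every number
below an assumption of mine rather than anything stated above (roughly 55
million deaths per year and six billion people are the approximate
circa-2000 figures), and with a failed Singularity crudely treated as
equivalent to everyone alive dying:

    # Crude timing tradeoff: lives lost to a delay versus lives saved in
    # expectation if waiting genuinely lowers the risk for everyone alive.
    # All numbers are placeholder assumptions.
    DEATHS_PER_YEAR = 55e6
    POPULATION = 6e9

    def delay_tradeoff(years, risk_now, risk_later):
        lost_to_delay = DEATHS_PER_YEAR * years
        expected_saved = (risk_now - risk_later) * POPULATION
        return lost_to_delay, expected_saved

    print(delay_tradeoff(years=10, risk_now=0.30, risk_later=0.25))
    # (550 million, 300 million): on these made-up numbers, a decade of
    # delay only pays for itself if it buys a substantially larger
    # reduction in the fundamental risk.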
For the record, I would expect the chance of a CFAI-architecture failure of
Friendliness resulting in an exponential number of ancestral simulations to
be fairly small (there is no obvious path from point A to point B), and,
like many specific negative scenarios, it can be reduced in probability
even further by the use of ethical injunctions and negative anchors.
Because of the Bayesian arguments, I'm always on the lookout for a way
that a malfunctioning Singularity (CFAI architecture or not) could wind up
creating a large number of ancestral simulations - or, given my own
experienced identity, a large number of programming-team simulations - but
I haven't found any particularly alarming-looking ones as yet. (Not that
there's much I could do about it if I did spot one... by hypothesis, if
the Bayesian argument is valid, it would be too late to do anything about
it.)
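Spelled out, the Bayesian bookkeeping behind that parenthetical looks
roughly like this; the simulation counts are purely illustrative, and the
sketch assumes every observer with experiences like mine is weighted
equally:

    # If a malfunctioning Singularity ran N ancestral (or programming-team)
    # simulations containing observers with experiences like mine, then,
    # weighting all such observers equally, the probability that this one
    # is the single original rather than one of the N copies is 1/(N+1).
    def p_original(num_simulations):
        return 1.0 / (num_simulations + 1)

    for n in (0, 10, 10**6, 10**12):
        print(n, p_original(n))
    # As N grows, the probability of being the original collapses toward
    # zero - which is why, if the argument is valid, spotting the problem
    # after the fact would already be too late.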
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence