Re: Donate Today and Tomorrow

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Thu Oct 28 2004 - 18:36:20 MDT


Slawomir Paliwoda wrote:
>
> There's no question that SIAI's work on FAI *theory* is important and
> should go on, but unless the Seed AI team translates Friendliness into a
> mathematical model and proves that it would be impossible for the entity
> to ever turn unFriendly, we should be extremely nervous about any plans
> to *implement* that theory. The stakes are simply too high.

I was planning to put a mathematical upper bound on the probability of
catastrophic failure within an expected operational lifetime before
replacement. Impossibility is not a word to be used outside the context of
physics.
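
A minimal sketch of the kind of statement I have in mind, with T and
epsilon as illustrative placeholders rather than derived quantities:

    \Pr[\text{catastrophic failure occurs before time } T] \le \varepsilon

where T is the expected operational lifetime before replacement, and
the point is that epsilon is explicitly derived and nonzero, not a
claim that epsilon = 0.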

Of course this just passes the buck of the fundamental safety problem,
which then becomes defining "catastrophic failure" so that it includes
every failure that is in fact catastrophic. Crying "mathematical
certainty" doesn't help with this; it just excludes silly solutions
that the speaker can't translate into silly math. If you make a big
deal out of whether it's mathematical or not, you'll just get silly
solutions translated into silly math, and the speaker will say: lo, it
is math. Rather, go on attacking the fundamental safety problem in its
new form, broadening your understanding of failure, seeing how simple
definitions fail through simple loopholes.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
