From: Peter de Blanc (firstname.lastname@example.org)
Date: Mon May 09 2005 - 19:41:13 MDT
On Mon, 2005-05-09 at 21:42 +0100, Russell Wallace wrote:
> Only if by "fail safely" you mean "Eliezer will be wise enough to see
> it's not a winner before go-live day". If actually implemented, I
> think it is highly likely to fail in such a way as to eliminate all
> sentient life.
If the extrapolated volition of humanity fails to converge, then there's
nothing to implement, and the FAI should do nothing. I use the term FAI
(Friendly AI) because you can't do Collective Volition (CV) with a
generic Really Powerful Optimization Process (RPOP) straight out of the
box - there has to already be some Friendliness content to prevent the
FAI from turning the universe into a volition-extrapolating machine. It
has to *want* to fail safely.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT