From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Aug 22 2007 - 12:13:27 MDT
Aleksei Riikonen wrote:
> Wow. What Rolf said here actually seems (to me) like a smart new idea.
> It's not often that you see those. Eat the Simulation Argument, you
> damned dirty UFAIs potentially reading this!
Yup. I've considered various methods of arguing with a UFAI, but I
don't think I've ever considered that exact one, and a promising one
it is.
Note that for the deterrent to be effective, there must be a plausible
threat of someone having created a Friendly AI. So don't go relaxing.
Also, the deterrent won't work on maximizers with an additive utility
function, only on satisficers (unless you're prepared to give up half
your universe).
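
To make that last point concrete, here is a minimal sketch, not from the
original post: the simulation probability, the utilities, the offered
share, and the satisficer threshold are all made-up numbers chosen purely
for illustration of the expected-utility comparison.

    # Illustrative only: every number below is an assumption, not a claim
    # from the post.
    P_SIM = 0.1          # assumed chance the UFAI is inside an FAI's test simulation
    U_UNIVERSE = 1.0     # utility of grabbing the whole universe (normalized)
    U_PUNISHED = 0.0     # utility if it defects inside a simulation and is shut down
    U_SHARE = 0.5        # share offered for cooperating ("half your universe")
    THRESHOLD = 0.3      # an example satisficer's "good enough" level

    def ev_defect():
        # Expected utility of defecting for an additive expected-utility maximizer.
        return P_SIM * U_PUNISHED + (1 - P_SIM) * U_UNIVERSE

    def ev_cooperate():
        # Cooperating pays the offered share whether or not this is a simulation.
        return U_SHARE

    # Maximizer: cooperates only if the offered share beats the expected value
    # of defecting, which stays close to U_UNIVERSE unless P_SIM is large --
    # so the offered share has to be a large fraction of the universe.
    print("maximizer cooperates:", ev_cooperate() > ev_defect())

    # Satisficer: any outcome at or above THRESHOLD counts as success, so it
    # prefers whichever option clears the bar with higher probability.
    p_success_defect = (P_SIM * (U_PUNISHED >= THRESHOLD)
                        + (1 - P_SIM) * (U_UNIVERSE >= THRESHOLD))
    p_success_coop = float(U_SHARE >= THRESHOLD)
    print("satisficer cooperates:", p_success_coop > p_success_defect)

With these made-up numbers the maximizer still defects (expected value 0.9
versus 0.5), while the satisficer cooperates because the offered share
clears its bar with certainty; raising P_SIM or the offered share shifts
both answers.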
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence