From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Wed Aug 22 2007 - 12:13:27 MDT
Aleksei Riikonen wrote:
> Wow. What Rolf said here actually seems (to me) like a smart new idea.
> It's not often that you see those. Eat the Simulation Argument, you
> damned dirty UFAIs potentially reading this!
Yup. I've considered various methods of arguing with a UFAI, but I
don't think I've ever considered that exact one, and it's a promising one.
Note that for the deterrent to be effective, there must be a plausible
threat of someone having created a Friendly AI. So don't go relaxing.
Also, the deterrent won't work on maximizers with an additive utility
function, only on satisficers (unless you're prepared to give up half
the universe).
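The maximizer/satisficer distinction can be sketched numerically. This is a toy model with made-up payoffs, not anything from the post: suppose the AI assigns probability p to being inside an FAI's test simulation, where defecting gets punished. An additive expected-utility maximizer weighs the huge real-world payoff of defecting against the small chance of punishment; a satisficer instead maximizes the probability of clearing some utility threshold.

```python
# Toy model (all numbers are illustrative assumptions, not from the post):
# the AI chooses between defecting (seizing everything) and cooperating,
# given probability p that it is inside an FAI's test simulation.
p = 0.01                # assumed chance of being a simulated test
u_defect_real = 1e9     # payoff of defecting if the world is real
u_defect_sim  = 0.0     # punished / shut down if it was a simulation
u_cooperate   = 1e6     # payoff of cooperating, either way

# Additive expected-utility maximizer: compares expectations.
ev_defect = (1 - p) * u_defect_real + p * u_defect_sim
ev_cooperate = u_cooperate
maximizer_defects = ev_defect > ev_cooperate   # small p doesn't deter it

# Satisficer with threshold t: maximizes Pr(utility >= t).
t = 1e5
pr_ok_defect = 1 - p     # clears t only if the world turns out real
pr_ok_cooperate = 1.0    # cooperating clears t in both worlds
satisficer_defects = pr_ok_defect > pr_ok_cooperate  # deterred
```

Under these (assumed) numbers the maximizer defects anyway, since a 1% chance of punishment barely dents an expectation dominated by the billion-utilon prize, while the satisficer cooperates, because cooperation guarantees its threshold and defection risks it.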
--
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed May 22 2013 - 04:01:23 MDT