Re: I am a moral, intelligent being (was Re: Two draft papers: AI and existential risk; heuristics and biases)

From: Eliezer S. Yudkowsky
Date: Tue Jun 06 2006 - 15:16:34 MDT

Martin Striz wrote:
> I think the argument is that with runaway recursive self-improvement,
> any hardcoded nugget approaches insignificance/obsolescence. Is there
> any code you could write for which nobody, no matter how many
> trillions of times smarter, could find a workaround?
Did you read the book chapter?

Eliezer S. Yudkowsky                
Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT