Re: I am a moral, intelligent being (was Re: Two draft papers: AI and existential risk; heuristics and biases)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Jun 06 2006 - 15:16:34 MDT


Martin Striz wrote:
>
> I think the argument is that with runaway recursive self-improvement,
> any hardcoded nugget approaches insignificance/obsolescence. Is there
> any code you could write for which nobody, no matter how many trillions
> of times smarter, could find a workaround?

Martin,

Did you read the book chapter?

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT