Quining (was Re: I am a moral, intelligent being (was Re: Two draft papers: AI and existential risk; heuristics and biases))

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Jun 09 2006 - 12:49:52 MDT


Martin Striz wrote:
>
> That's an interesting gimmick, but a quine has no internal model of
> itself. My point was that, just as we often can't predict our own
> future actions because we are oblivious to the substrate-level
> action of our minds, an AI won't be able to simultaneously model its
> substrate-level activity, so there will be some lack of information
> and some error.
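
[For concreteness, the classic two-line Python quine, an illustrative
sketch not part of the original exchange: it prints its own source
exactly, by string substitution, without holding any model of itself.]

    # A quine: its only output is its own source text.
    # The string is a template; %r re-inserts the string into itself,
    # and %% becomes a literal %.
    s = 's = %r\nprint(s %% s)'
    print(s % s)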

An AI with random-access memory *is* a complete internal model of
itself. Why should an AI bother to quine itself into its scarce RAM,
when two copies contain exactly the same information as one? What good
does it do to model yourself perfectly? What more does it tell you than
just being yourself?
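
[By contrast, a minimal sketch of the alternative described above,
assuming a Python process whose own source file is readable: the
program inspects itself directly rather than carrying a second copy.]

    # No quining needed: a program with access to its own storage can
    # simply read its source; a duplicate would add no information.
    with open(__file__) as f:
        print(f.read())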

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
