Re: Quining (was Re: I am a moral, intelligent being (was Re: Two draft papers: AI and existential risk; heuristics and biases))

From: Martin Striz
Date: Fri Jun 09 2006 - 13:10:40 MDT

On 6/9/06, Eliezer S. Yudkowsky wrote:
> Martin Striz wrote:

> An AI with random-access memory *is* a complete internal model of
> itself. Why should an AI bother to quine itself into its scarce RAM,
> when two copies contain exactly the same information as one? What good
> does it do to model yourself perfectly? What more does it tell you than
> just being yourself?

Wouldn't it be smarter to test designs in a model before committing
them to your source code, rather than rewriting it willy-nilly without
knowing empirically what the changes would do? The latter seems even
more dangerous.
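
[A toy sketch of the test-in-a-model-first policy being proposed; all names here are hypothetical illustration, not anyone's actual design:]

```python
# Hypothetical sketch: a self-modifying system trials a candidate
# rewrite of one of its own functions against a test suite in a
# sandbox, and only swaps it into the live code if the model run
# confirms empirically that behavior is preserved.

def current_sort(xs):
    """The existing 'source code' the system might want to improve."""
    return sorted(xs)

def candidate_sort(xs):
    """A proposed rewrite (hypothetical) awaiting empirical checking."""
    return sorted(xs, reverse=False)

def passes_model_tests(fn):
    """Run the candidate on known cases in the model before committing."""
    cases = [[3, 1, 2], [], [5, 5, 1]]
    return all(fn(c) == sorted(c) for c in cases)

# Commit the rewrite only if the sandboxed model run succeeds;
# otherwise keep the current, known-good version.
active_sort = candidate_sort if passes_model_tests(candidate_sort) else current_sort
```

[Of course, as the next line notes, a test suite only catches the mistakes it was written to catch, so this lowers risk without guaranteeing anything.]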

Either way, my point stands: you can't guarantee that AIs won't make mistakes.


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT