Re: An essay I just wrote on the Singularity.

From: Eliezer S. Yudkowsky
Date: Sun Jan 04 2004 - 16:19:38 MST

Samantha Atkins wrote:
> In humans I note that self-improvement and improvement of tools are not
> tasks receiving high priority relative to working toward other goals.
> I am curious whether an AI would need explicit goals toward
> self-improvement or would naturally hit upon it as conducive to its
> supergoals and make it a sufficiently high priority.

That's not an "or" question; you could *suggest* to a young AI that it
spend time on self-improvement while still representing the goal as
strictly instrumental. Such an AI would neither hit upon the goal
spontaneously nor represent it as an explicit intrinsic utility.
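To make the distinction concrete, here is a minimal toy sketch in Python.
The names (Goal, intrinsic_utility, contribution) are invented for
illustration and do not describe any real goal architecture: self-improvement
is valued only through the supergoal it serves, rather than carrying its own
intrinsic utility term.

    # Toy illustration (assumed expected-utility framing, not a real system):
    # a goal valued strictly instrumentally derives all of its worth from the
    # supergoals it serves; an intrinsic goal carries its own utility term.
    from dataclasses import dataclass, field

    @dataclass
    class Goal:
        name: str
        intrinsic_utility: float = 0.0       # nonzero only for intrinsic goals
        serves: list["Goal"] = field(default_factory=list)
        contribution: float = 0.0            # estimated boost to each goal served

    def value(goal: Goal) -> float:
        """Instrumental value flows down from whatever the goal serves;
        intrinsic utility is added only if the goal has its own term."""
        derived = sum(goal.contribution * value(parent) for parent in goal.serves)
        return goal.intrinsic_utility + derived

    # A supergoal with intrinsic utility.
    cure_disease = Goal("cure_disease", intrinsic_utility=1.0)

    # Self-improvement represented strictly instrumentally: zero intrinsic
    # utility, valued only because it is expected to further the supergoal.
    self_improve = Goal("self_improvement", serves=[cure_disease], contribution=0.3)

    print(value(self_improve))   # 0.3 -- all of it derived from the supergoal

    # Drop the supergoal and the instrumental goal's value falls to zero,
    # whereas an intrinsic formulation would retain its own utility regardless.
    self_improve.serves = []
    print(value(self_improve))   # 0.0

In this framing, "suggesting" self-improvement amounts to adding the
instrumental node by hand rather than waiting for the AI to derive it, while
leaving its value entirely dependent on the supergoals.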

Eliezer S. Yudkowsky                
Research Fellow, Singularity Institute for Artificial Intelligence
