Re: An essay I just wrote on the Singularity.

From: Samantha Atkins
Date: Sun Jan 04 2004 - 11:37:52 MST

On Sat, 3 Jan 2004 20:29:19 -0600
"Paul Fidika" <> wrote:
> Samantha Atkins wrote:
> >How do we know that one of the primary design
> >goals of the project, recursive self-improvement,
> >will actually be what is built? Huh? Our ever
> >growing body of knowledge gathered and
> >transmitted to each generation is an example of
> >using evolutionary selection alone? This seems
> >rather strained.
> Speaking of which, why is everyone on this list so confident that
> recursive-self-improvement will even work? It seems to be tacitly assumed on
> this list that the amount of intelligence it will take to create a new
> intelligence will increase logarithmically or at least linearly with the
> sophistication of the intelligence to be created, but what is this
> assumption based upon? For example, if X is some objective and numeric
> measure of intelligence, won't it be more likely that it will take X^2
> amount of intelligence (either iterating over a certain amount of time or
> multiple minds working in parallel) to create an X + 1 intelligence? Or
> worse yet, might it take 2^X amount of intelligence to create an X + 1
> intelligence? Perhaps 2^2^X amount of intelligence? If so, then there are
> very definite limits on how far a super-intelligence will be able to go
> before it "runs out of steam," and it might not be all that far.

Well, as I understand it, you aren't building a "new intelligence" but rather have an intelligence able to iteratively improve itself. I don't see any simple way to guesstimate the difficulty of each iteration, or when, if ever, the iterations would become too expensive (or too low-priority) to pursue. In humans I note that self-improvement and improvement of tools is not a task that receives high priority relative to working toward other goals. I am curious whether an AI would need explicit goals toward self-improvement, or whether it would naturally hit upon self-improvement as conducive to its supergoals and assign it a sufficiently high priority.
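Paul's question about scaling can be made concrete with a toy model. The sketch below is my own illustration, not anything proposed in the thread: it assumes a hypothetical per-step cost function cost(x) for improving from intelligence x to x + 1, and a fixed total effort budget, then counts how many improvement steps fit under linear, quadratic, and exponential cost assumptions. The specific functions, budget, and starting point are all arbitrary choices for illustration.

```python
# Toy model (illustrative only): how far does iterative self-improvement get
# before it "runs out of steam" under different assumed per-step costs?
# cost(x) = hypothetical effort to improve from intelligence x to x + 1.

def steps_before_budget_exhausted(cost, budget=1_000_000.0, x0=10.0):
    """Count how many +1 improvements fit within a fixed effort budget,
    starting from intelligence level x0."""
    x, spent, steps = x0, 0.0, 0
    while True:
        c = cost(x)
        if spent + c > budget:
            return steps
        spent += c
        x += 1
        steps += 1

# Arbitrary cost-scaling scenarios from the discussion above.
scenarios = {
    "linear      cost(x) = x":   lambda x: x,
    "quadratic   cost(x) = x^2": lambda x: x ** 2,
    "exponential cost(x) = 2^x": lambda x: 2.0 ** x,
}

for name, cost in scenarios.items():
    print(f"{name}: {steps_before_budget_exhausted(cost)} improvement steps")
```

Under these made-up numbers the linear scenario allows over a thousand improvement steps, the quadratic scenario on the order of a hundred, and the exponential scenario only a handful, which is the qualitative point at issue: whether the iterations stay cheap enough to continue, or the cost curve closes them off quickly.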

- samantha

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:43 MDT