Singularity function.

From: Michael Wilson (
Date: Thu Aug 28 2003 - 14:24:24 MDT

Hello. My discovery of this community and the associated
writings on practical aspects of the singularity was less
than a month ago; due to time constraints I'm still building
a well-justified stance on these issues. However, I do have
one pressing question to which I have not found an answer
elsewhere. There is a mention of it on the
'singularity holes' SL4 wiki page
( and
several occurrences of superficially similar arguments about
self-improvement rate constraints in the archives, but the
following question was not addressed in detail.

The singularity argument assumes that the technological
development of intelligence capable of direct
self-improvement will result in the next phase of the
exponential increase in intelligence seen over the lifespan
of the local universe so far. As I see it the rate of
intelligence change is determined by some function that takes
the resources available and the desired step in intelligence
as parameters and generates a probability distribution
describing the likelihood of the intelligence increase
occurring over time. To date the rate of increase has been
improving as biological evolution has become more efficient,
followed by a recent sharp spike as cultural evolution became
possible (and rapidly more effective). Both these processes
would appear to have sharp limits on their ultimate
effectiveness; the question of whether they might work
together to produce hyperintelligence is rendered moot by
the (likely endemic) high instability of societies of
minimally sentient organic intelligences over evolutionarily
meaningful timescales.

The heart of the singularity is clearly the sudden creation
of a feedback loop that makes the current level of
intelligence an important parameter of the function giving
the probability/time distribution for the creation of an
arbitrarily higher intelligence. When this function is
differentiated with respect to time, the current intelligence
level rapidly becomes the only significant parameter. The
critical question is
therefore 'what can we say about the relationship between
the current intelligence level and the difficulty (median of
the probability/time curve) of reaching the next incremental
level?'. The first term, current intelligence, breaks down
into a secondary relationship with the total
computing power available (ops/second); the premise of
self-improving intelligence implies that with increasing
intelligence these operations will also be more efficiently
directed. In other words, at any one point on the graph,
developing intelligence is a non-deterministic computational
problem of order O(N^n), where the exponent n is inversely
proportional to the current intelligence level.
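The function described above can be sketched in code. This is a toy illustration only: the names (`step_time`, `time_to_reach`) and the functional forms are my assumptions, in particular the premise that the fraction of raw ops usefully directed grows linearly with intelligence, and difficulty is left as a free parameter.

```python
# Sketch of the function described above: the median of the
# probability/time curve for one incremental step, as a function of
# current intelligence and raw resources. All functional forms are
# illustrative assumptions; "directedness" is taken to grow linearly
# with intelligence, per the self-improvement premise.

def step_time(intelligence, raw_ops, difficulty):
    """Median time for one incremental step at the given level."""
    directed_ops = raw_ops * intelligence           # premise: ops better aimed
    return difficulty(intelligence) / directed_ops

def time_to_reach(target, raw_ops, difficulty, start=1.0):
    """Sum of median step times for successive unit increments."""
    total, level = 0.0, start
    while level < target:
        total += step_time(level, raw_ops, difficulty)
        level += 1.0
    return total

# If base difficulty outruns the directedness gain (here: exponential
# difficulty against linear directedness), successive steps take ever
# longer -- the plateau case.
print(time_to_reach(10.0, raw_ops=1.0, difficulty=lambda i: 2.0 ** i))
```

With a difficulty that grows only as fast as the directedness gain (e.g. `lambda i: i`), every step takes the same time; the interesting regimes come from how much steeper the difficulty function is.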

If that assumption is correct then anything we can say about
the function relating base difficulty O to target
intelligence level I is critically important to determining
whether a singularity will occur. I'm assuming that it may
well change at transition points as intelligence increases,
at boundaries analogous to the development of tool use,
cultural evolution and complete self-awareness (seed AI).
If the relationship is linear then geometric takeoff will
occur; a hard takeoff towards singularity if that's the
initial relationship. If the relationship is quadratic then
growth will be exponential; possibly a hard or a soft
takeoff depending on the exponent, but singularity all the
same. If the relationship is cubic then growth is linear;
a posthuman era certainly, but not a singularity as I
understand it. If the relationship gets much worse than
cubic at any point, intelligence plateaus until computational
power increases enough to get things back into gear; if
it can't be increased fast enough then progress halts and
the singularity doesn't occur.
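The three regimes can be checked numerically. The aggregate growth law `dI/dt = c * I**(3 - k)` below is purely my assumption, chosen so that k = 1, 2, 3 reproduce the hard-takeoff, exponential and linear outcomes described above; `simulate` and all its parameters are hypothetical.

```python
# Toy numerical check of the three regimes, under the assumed aggregate
# form dI/dt = c * I**(3 - k), where k is the exponent relating
# incremental difficulty to current intelligence I:
#   k = 1 (linear difficulty):    dI/dt ~ I**2, finite-time blow-up
#   k = 2 (quadratic difficulty): dI/dt ~ I,    exponential growth
#   k = 3 (cubic difficulty):     dI/dt ~ 1,    merely linear growth

def simulate(k, steps=1000, dt=0.002, c=1.0, i0=1.0, cap=1e12):
    """Euler-integrate dI/dt = c * I**(3 - k), stopping at `cap`."""
    intelligence = i0
    for _ in range(steps):
        intelligence += dt * c * intelligence ** (3 - k)
        if intelligence >= cap:
            break                     # takeoff: growth has run away
    return intelligence

for k, label in ((1, "linear"), (2, "quadratic"), (3, "cubic")):
    print(f"difficulty ~ I**{k} ({label}): final I = {simulate(k):.4g}")
```

With these parameters the linear case blows past the cap well before the time horizon, the quadratic case grows by a modest exponential factor, and the cubic case creeps up linearly; which of the three the real function resembles is exactly the open question.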

I intuitively feel that it should be possible to place some
bounds on this function using complexity theory and/or
existing data points (evolution produced humanity in a few
billion years, humanity produced artificial intelligence
equivalent to insects within decades of the computing power
becoming available). However I don't have the relevant
training to follow this line of reasoning much further.

Following review of the archives and wiki pages, it seems
likely to me that someone here has thought this through
and arrived at a more complete answer, so I would
appreciate comments on the validity and implications of
the above.
 * Michael Wilson

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:42 MDT