Re: JOIN: Joshua Fox

From: Richard Loosemore
Date: Tue Feb 07 2006 - 08:13:49 MST

Joshua Fox wrote:
> Greetings,
> I read Vinge's fiction years ago and enjoyed it. Only recently, however,
> I read the works of Yudkowsky and Kurzweil, and the idea clicked. I get
> it. SL4.
> Details about me are at As my initial
> contribution to the dialog, let me offer this:
> Singularity theory, in order to be intellectually honest, takes a hard
> look at all possible objections. Though we can try to consider all
> objections to details of the theory, we have to be sure that we are not
> deluding ourselves. One way to understand how the whole theory might be
> wrong is to find historical parallels to the meta-features of the theory.
> Socialist theory of the late 19th Century and Singularity theory of the
> early 21st both believe that inevitable forces of history take the world
> through successively more evolved phases, which must inevitably
> culminate--within a few decades of the time of the theorizing--in an
> ultimate Utopian phase. Nineteenth Century Socialist philosophers
> honestly thought they had a firm _scientific_ basis, and they were
> wrong. How can we be sure that we are different?
> Joshua


In general, I agree with other comments about your question: those
Nineteenth Century Socialist philosophers used the word "scientific" in
name only; they never had the empirical grounding the label implies.

Personally, I base my opinions about the Singularity purely on the
development of Artificial General Intelligence (AGI). I think that
current predictions about when "human-level" AGI will become a reality
are based on extrapolations from known techniques, and are deeply
pessimistic because I don't think a known technique is what will make it
happen.  I am afraid I put little stock in Kurzweil's analysis of
historical trends: it is good background, but there is no reason at all
why those beautiful exponential curves of *general* technological
progress should not simply hit a wall.

I also disagree with many about the idea of Unfriendly AI (UFAI).  Some
talk as if it is almost inevitable, and say that we will have to work
like crazy to avoid it.  I think they base this UFAI-inevitability idea
on their own particular take on how to build an AGI, and I dispute their
methods.  In short, I am of the opinion that the approach to AGI they
espouse is going to lead to an AGD (Artificial General Dumbtelligence),
which will never reach the level of human intelligence even after many
more decades of painstaking work, and hence will never be a threat.

My position is (ahem) hotly disputed in some quarters.

Richard Loosemore.

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:55 MDT