Re: How hard a Singularity?

From: Eliezer S. Yudkowsky
Date: Sat Jun 22 2002 - 14:03:50 MDT

Ben Goertzel wrote:
> Eliezer S. Yudkowsky wrote:
> >
> > By hypothesis, the AI just made the leap to human-equivalent smartness.
> > We know from evolutionary experience that this is a highly significant
> > threshold that opens up a lot of doors. Self-improvement should be going
> > sixty at this point.
> Eliezer, I think you are mistaken about this. The shift from chimpanzee
> to human was probably nothing like the shift from infrahuman to
> human-equivalent AI will be. The two development trajectories are too
> different to draw analogies between them. In particular, the sudden
> invention of "general intelligence" supported on chimpanzee substrate
> may have no analogue in the development path of an AI.

Conceded, Ben. However, I still think that jumping to "human-equivalent"
smartness should be worth something in opened doors - in terms of which parts
of the system are self-understandable, if nothing else. (I.e., the AI was
designed by humans, therefore human smartness is a significant internal
threshold.) But I could be mistaken, and at any rate you are correct that we
know nothing "from evolutionary experience".

Eliezer S. Yudkowsky
Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT