RE: When does it pay to play (lottery)?

From: Ben Goertzel (ben@goertzel.org)
Date: Mon Jan 24 2005 - 11:41:39 MST


Hi Brian,

Indeed, I disagree with David's original statement, and with his assessment
of the current state of knowledge about human-level AI.

However, I also think the risk of any current AI system spontaneously
achieving hard takeoff is effectively zero.

The risk of some nut intentionally creating an AI that achieves hard takeoff
and destroys the human race within the next 10 years is, IMO, **not**
effectively zero.

The chance of this happening during the next 50 years, IMO, is *scarily
high*.

-- Ben

> Disregarding issues I still have with your analogies, I think you are
> sidestepping the original statement from David, which was regarding
> *any* AGI implementations in the *near future*. I interpret that to
> include things like Novamente within the remainder of this decade, but
> perhaps that's not what he meant?
>
> Here's the quote again:
>
> "The chance that *any* implementation of AI will
> *take off* in the near future is absolutely zero. We haven't the foggiest
> clue exactly what it will take to make a human level AI, let alone an AI
> capable of doing serious harm to the human race."
>
> That, combined with his casual "Oh, and even if it does take off, it
> will be constrained by current-day hardware," kind of set me off. These
> aren't the kinds of attitudes you would expect or hope to hear from
> people doing this kind of work.
> --
> Brian Atkins
> Singularity Institute for Artificial Intelligence
> http://www.intelligence.org/


