From: Ben Goertzel (ben@goertzel.org)
Date: Fri May 21 2004 - 17:43:39 MDT
Hi,
> > I'm also fairly sure that SIAI FAQ #2 or thereabouts should be the one
> > I asked earlier and no one has yet answered: namely, how about treating
> > AI in general as a WMD, something to educate people not to think they
> > can build safely and to entice people not to want to build?
>
> I've had no luck at this. It needs attempting, but not by me. It has to
> be someone fairly reputable within the AI community, or at least some
> young hotshot with a PhD willing to permanently sacrifice his/her
> academic reputation for the sake of futilely trying to warn the human
> species. And s/he needs an actual technical knowledge of the issues,
> which makes it difficult.
The academic AI community does not take the possibility of human-level
or superhuman artificial general intelligence very seriously; at least,
it has not done so for a few decades. Slowly, it is starting to come
around to recognizing this possibility again. My guess is that in another
10-20 years the academic AI community will start to think about the
dangers of powerful AGI. Of course, by that time-frame someone may or may
not already have created a superhuman AI, for good or for ill...
-- Ben