RE: How hard a Singularity?

From: Smigrodzki, Rafal (SmigrodzkiR@msx.upmc.edu)
Date: Mon Jun 24 2002 - 13:09:46 MDT


Ben Goertzel wrote:

Before human-level AI is achieved, government won't care about the
pertinent AI research; after it's achieved,

### After HL-AI is achieved and just one copy gets onto the net, the horse
is out of the barn - nothing short of dismantling the Internet and applying
the full force of a world government will stop hackers from turning your
nice AI into a monster (just for the heck of it). The AI can then copy
itself, multiplying its capabilities by orders of magnitude without a bit
of self-enhancement, and overpower humanity by sheer numbers within a few
hours (assuming transmissibility over the net).

The only way to avoid this (as I see it) is to install millions of friendly
AIs to occupy the living space in which an unfriendly AI could undergo
Darwinian or Lamarckian evolution.

Rafal



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT