From: Ben Goertzel (firstname.lastname@example.org)
Date: Tue Jan 25 2005 - 13:58:34 MST

> 1. Usually the lab begins using strong protocols _after_ the organism
> has already shown it is dangerous - but with an AGI this may be too late
> to wait before applying the strongest safety protocols. Bio labs have
> the luxury of fooling around with strange totally unknown things using
> low safety protocols initially, because if something does go wrong it
> won't be the end of the world. With an AGI, it's probably not as simple
> as quarantining a lab or small geographic area when something bad happens.

IMO, the odds of current biology experiments unexpectedly leading to a
humanity-destroying plague are higher than the odds of current AI systems
unexpectedly leading to a hard takeoff. Fortunately both odds are very low.

If I set Novamente aside and looked at the general rate of progress in
bioscience vs. AI, I'd have to say that the bio-danger looks more likely to
increase in the next 10 years than the AI-danger, because bioscience has
been progressing amazingly fast whereas AI has been rather stagnant for a
long time.

Given my knowledge of Novamente, however, my assessment is that the
near-future AI risk is actually the larger one, because I can see that --
in spite of the slow progress in AI in the past -- it is plausible for
someone to create a superhuman AI within the next few years, given the
right ideas and adequate funding.

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT