Re: Safety of brain-like AGIs

From: Ben Goertzel (ben@goertzel.org)
Date: Wed Feb 28 2007 - 10:11:49 MST


>
>
> But we do have one other choice the slave didn't have: we can choose not
> to go, or to delay until we know more. I don't think AI can be stopped
> forever, but I think the human race needs to seriously consider holding
> off the 'singularity' and advancing toward it slowly. Some view it as
> a utopia, but this is not the way world history has ever worked. Great
> upheaval is usually very messy. Losing the human race entirely, and
> violently, is more likely than the utopian outcome. A much slower
> approach may allow evolutionary steps, with time to grasp the
> next step at each stage.
>

I am not denying the real possibility of a negative outcome; however, I
also don't think that "the way world history has ever worked" is a very
good guide for the way world history is likely to work after superhuman
intelligences come into the picture...

-- Ben



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:57 MDT