From: Samantha Atkins (firstname.lastname@example.org)
Date: Fri Jan 02 2004 - 02:03:50 MST
On Wed, 31 Dec 2003 11:32:51 -0800
Robin Lee Powell <email@example.com> wrote:
> On Wed, Dec 31, 2003 at 12:48:52PM -0500, Ben Goertzel wrote:
> > It's just not true that humans developing strong nanotech will
> > *necessarily* lead to destruction.
> Agreed. However I, personally, see it as far more likely than for
> any previous technology (all one of them: nukes). Mostly because
> you can accidentally create grey goo, but you can't accidentally
> bomb the world to the stone age.
Actually, I doubt very much that you can "accidentally" create gray goo. It would more likely take a good deal of deliberate work to create a self-replicating bit of nanotech that dismantles everything but instances of itself in order to build more of itself, and that is also hardy enough to survive the various conditions it would encounter.
What I would consider a more pressing concern is what human beings are likely to attempt to do to one another using strong AI, nanotechnology, or both. Perhaps most of us simply assume humans are as they are, warts and all, but it is not clear to me that our future can be reasonably assured without working to overcome some of the more problematic aspects of humans and human society.

It is not clear, for instance, that universal physical abundance would mean that everyone has as much as they need to survive and thrive from a physical standpoint. Without reworking current societal/economic biases, it is more likely that the "haves" would use the technology to increase their differential power and show little interest in using it to benefit the "have-nots". It is likely that the same old assumptions of my group/nation/tribe being right and the others "evil" will lead to attempts to wipe out/utterly suppress/oppress the *other* at hyperspeed using the new technology.
FAI is one way out of the quagmire. It may or may not be the only way, or the most likely one.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:43 MDT