From: mike99 (firstname.lastname@example.org)
Date: Sun Sep 15 2002 - 16:26:11 MDT
> -----Original Message-----
> From: email@example.com [mailto:firstname.lastname@example.org] On Behalf
> Of Ben Goertzel
. . . .
> While I don't feel at all certain that creating a human-friendly
> AI will solve everything, I still feel this is the path most likely to
> succeed. This is because I am a bit cynical about human neural
> wiring. We may see a tendency toward more advanced consciousness, but
> we will see a tendency toward more advanced and more easily deployed
> massively destructive weapons technology (bio weapons and others).
> Unfortunately I'm afraid the weapons progress may outpace the
> consciousness progress, and so I'm hoping in my gut (and working to
> ensure that) the AI progress will outpace either...
> -- Ben G
Much though I would prefer to agree with Samantha here about the potential
for spiritually-based transformation of consciousness on a mass scale, I am
compelled by my knowledge of history and by my personal experience in this
realm to conclude that any such transformation will be small-scale and
individual. Like Ben, I see our human shortcomings--what Stewart Brand
calls our "human cussedness"--as hard-wired artifacts of our
evolutionary origins. We need transhuman transformation to overcome these on
a mass scale. (One or two Gandhis we can get now, but that's not enough.)
And, in my opinion, that needed transhuman transformation will not transpire
without some essential tools, the most important of which is Friendly AI.
There are no guarantees that we will succeed in this endeavor. But what
choice do we have except to try? The alternative is either to go on as
things are now until humanity kills itself (or dies off), or else to
transcend into transhumanity via a positive Singularity.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:41 MDT