RE: How hard a Singularity?

From: Stephen Reed
Date: Thu Jun 27 2002 - 12:37:16 MDT

On Thu, 27 Jun 2002, James Higgins wrote:

> >1. I believe that the current government institutions funding AI are
> >sufficient to manage Seed AI development - and I trust them based upon
> >personal experience and observation.
> The problem, as I see it, has nothing to do with trust, honor or morality;
> only purpose. I strongly believe the purpose a government or even a
> company would have for an AGI is incompatible with the concept of
> Friendliness. By nature such entities wish to promote and protect
> themselves over all others. They, at least in the case of governments,
> consider violence to be acceptable when they can't get the other party to
> agree via debate. Attempting to create an AGI with these capabilities /
> goals is highly likely to fail. The real problem is that it wouldn't
> necessarily fail by not working; it could very easily fail by working but
> not ending up Friendly. In such a case no one (at least no human) would
> benefit, and most likely every human would suffer greatly. Because this
> is the nature of governments and companies, I don't expect that they would
> realize this is the case (a few individuals might, but that would be too
> late once the project was in motion). Do you see my point on this
> issue? Will you at least give this serious thought and consider what might
> happen if this were truly the case?

I do not agree with your premise, but accept the logic of your argument.
I am open to changing my mind regarding the nature of organizations as
events unfold in the years to come.


Stephen L. Reed                  phone:  512.342.4036
Cycorp, Suite 100                  fax:  512.342.4040
3721 Executive Center Drive      email:
Austin, TX 78731                   web:
         download OpenCyc at

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT