Re: Threats to the Singularity.

From: Michael Roy Ames
Date: Sat Jun 15 2002 - 23:06:09 MDT

Ben Goertzel wrote:
> > However, I think it's pretty likely that intelligent software WILL be
> > wiser than humans, due to reasons Eliezer has pointed out nicely in his
> > writings. We have an evolutionary heritage that makes it really tough
> > for us to be wise, and there seems to be no reason why intelligent
> > software would have any similar problem.
> >

Samantha replied:
> It is not at all clear that not having an evolutionary history
> or processing more information faster leads to wisdom or
> "better" for sufficient values of "better". How will we even
> evaluate the question? What are the criteria and how do we know
> they are the correct criteria? I think we should be very sure
> of the answers to such questions since it is nothing less than
> the survival of the human race that is at stake.

I agree with Samantha on this point: Super Intelligence does not equate to,
or converge on, Wisdom. Intelligence can be used to build/obtain wisdom, but
that doesn't just *happen* by default... at least I have never encountered a
convincing argument that it does. There is zero evidence from our single
data point (humanity) that intelligence automatically converges on wisdom,
or altruism, or selfishness for that matter. I would profoundly
*hope* that it does by default, but hoping isn't good enough when we are
dealing with existential risks. Anyone who cares about their own future
(and that of everyone else too) should do everything they can to face the
problem squarely, and try to solve it.

The problem: How to arrange things so that an SI builds/obtains a high level
of wisdom.

And the really dodgy part is how to define wisdom. makes a pretty good
start at this. But it's just a start. Defining Friendliness content is going
to be a hell of a job.

Michael Roy Ames
Ottawa, Canada.

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT