Re: The problem of cognitive closure

From: Randal Koene
Date: Fri Mar 16 2001 - 16:40:37 MST

On Fri, 16 Mar 2001, Jimmy Wales wrote:

> I think we can trust, based on our *own* knowledge, that a superintelligence
> will be benevolent. But that isn't something we can do anything about. I
> don't think that designing in a "prime directive" is really feasible.

No, a prime directive or laws of robotics is clearly not feasible. You
should check out the discussion about this that has been raging on for a
while now. Some consensus was
reached. Obviously you cannot "control" a superhuman intelligence if you
actually wish to put it to use (i.e. set it free to some measure - which
automatically leads to its complete freedom). You can try to understand
its fundamental requirements (at least in each of its generations), and
use that to trade/negotiate/interact with it.
Optimally, you can join its ranks, which is most likely achievable
through a mind upload into a whole-brain-emulated state.

While I make no such claim myself, I should point out that if you
actually believe that a superhuman intelligence will be benevolent
(judging from the human example), then you should also be comfortable
assuming the same about uploads. Thus there should be no catastrophe to
fear from allowing all humans to elevate themselves through uploading.

If you then decide to say "on second thought, perhaps superhuman
intelligence is not necessarily benevolent", then it follows that this is
also true for non-uploaded A.I. Then the question becomes, how far can
you trust A.I. that is potentially non-benevolent to keep the best
interests of humanity at heart? Not very far. The conclusion, once again:
at least try to keep up with it, at the head of developments, by uploading.


Neural Modeling Lab, Department of Psychology - McGill University, (514)-398-4319,

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT