RE: guaranteeing friendliness

From: Michael Wilson
Date: Fri Dec 02 2005 - 17:15:55 MST

Herb Martin wrote:
> Now you have indirectly made my point about friendly AI:
> Once we reach the Singularity we cannot be assured of
> a 'friendly AI' or even have any control or effect on
> the developing and evolving AI.

We can only be assured of it if we go out and ensure that it
happens. This isn't passive futurology; virtually everyone
here has /some/ opportunity to improve the odds (however
slightly) of a humanly desirable Singularity. It's true that
there will be a significant chance of failure in that
goal right up to (and beyond) the point at which human
intelligence is surpassed, but so far the goal itself appears to be
achievable.

> Guaranteed control is an illusion.

This is definitely not true in the general case. The only
reason it even sounds vaguely plausible is that we hear
so much about (and attach so much emotional importance to)
cases of humans trying to control other humans, which, it's
true, doesn't tend to work too well.

> Much like believing you can keep terrorists from taking
> down an airplane by taking away sewing scissors from
> ordinary passengers.

This is an astoundingly bad attempt at an analogy, to the
point of being actively misleading. Aside from the attempt
to import random political and emotional baggage, and the
usual reasons why it's fairly futile to try to evaluate what
wildly transhuman intelligences can and can't do, the task of
preventing general intelligences with harmful goal systems
from self-improving to a dangerous level is nothing like an
obscure physical security issue faced by some contemporary hominids.

 * Michael Wilson


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:53 MDT