Re: SIAI's flawed friendliness analysis

From: Brian Atkins (brian@posthuman.com)
Date: Sat May 17 2003 - 19:14:21 MDT


Bill Hibbard wrote:
> chapter. It will be impossible for regulation to prevent all
> construction of unsafe AIs, just as it is impossible to prevent
> all crimes of any kind. But for an unsafe AI to pose a real
> threat it must have power in the world, meaning either control
> over significant weapons (including things like 767s), or access
> to significant numbers of humans. But having such power in the
> world will make the AI detectable, so that it can be inspected
> to determine whether it conforms to safety regulations.

We are worrying about a UFAI that has reached a level of intelligence
sufficient to be dangerous, yes? Somewhere above human level. AND it has
at least one very willing human servant at its disposal. Even IF (a huge
and quite unlikely if, IMO) it can't use the Net undetected to get what
it needs done, the human will do its bidding. I'm sorry, but your scheme
described above looks rather silly in the face of such an adversary. The
current government can't even catch lowly human-intelligence-level
terrorists very well.

Remember, a being with such an intelligence level may very well not
pursue power through the kinds of traditional means you may be
imagining. It could, for instance, if it were good enough, come up with a
plan to build some working molecular nanotech and from there do whatever
it wants. And it might be able to execute such a plan in an undetectable
fashion, barring a 100% worldwide transparent society.

>
> The danger of outlaws will increase as the technology for
> intelligent artifacts becomes easier. But as time passes we
> will also have the help of safe AIs to help detect and
> inspect other AIs.
>

Even in fictional works like Neuromancer, we see that such Turing
Police do not function well enough to stop a determined superior
intelligence. Realistically, such a police force will only have a real
chance of success if we have a very transparent society... it
would require societal changes on a very grand scale, and not just in
one country. It all seems rather unlikely... I think we need to focus on
solutions that have a chance at actual implementation.

-- 
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/

