Re: Effective(?) AI Jail

From: sunrise2000@mediaone.net
Date: Sun Jun 17 2001 - 11:42:32 MDT


<snip>

>down in relative terms from an unusually - traditionally anyway - high
>growth rate in the '90s.
>
>Besides that, this whole discussion is moot. It seems likely that an AI
>capable of subverting a human through a VT100 terminal or some other
>low-bandwidth connection wouldn't need human cooperation to escape from
>any jail humans would be able to build.
>
>So technically - in this highly contrived scenario - those who worry about
>an AI escaping by co-opting human assistance, by whatever means, are right.
>But I would say that by the time such a thing became possible, the AI would
>already be long gone; or worse, if it be unfriendly, it would be like Visa
>"everywhere we want to be".

No, no, no. This issue is quite pertinent; in fact, it's critical. I am
building an AI and don't intend to make it specifically friendly. (Plug:
Volunteers to help are always welcome.) I, and AI developers in general,
need to know at what point in the development process we need to start
worrying about our program escaping. This thread has strayed a bit from my
original intent: designing a jail *which is* escape-proof. It feels like
there ought to be a way to observe prototypes without letting them injure
anyone.

Point of clarification: I am not (now) interested in how to contain an
*unfriendly* AI, just in how to contain a *potentially* unfriendly one.
While containing an unfriendly AI may be useful as a defense against AI
terrorists, I'm much more interested in containing developmental prototypes.

Dave
