Re: Military Friendly AI

From: Brian Atkins (brian@posthuman.com)
Date: Sat Jun 29 2002 - 18:47:37 MDT


Ben Goertzel wrote:
>
> hi,
>
> >
> > Do you plan any "containment" features as part of your protocol?
> >
>
> As already stated on this list several times, we intend to give Novababy
> read but not write access to the Internet, at first, until a lot of study
> has been done.

Please describe how this works... as I'm not sure you're aware, simply
sending a request formatted the right way to a vulnerable web server can
be enough to implant a virus or other code on it.
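
To make that concrete, here is a minimal sketch (hypothetical host,
placeholder payload) of the kind of thing I mean. The Code Red worm
spread last summer by exactly this route: a single crafted GET request,
which is a pure "read" operation from the client's point of view:

    import socket

    HOST = "vulnerable.example.com"  # hypothetical host, illustration only

    # Request line in the style of the Code Red worm (July 2001), which
    # spread via one GET request exploiting a buffer overflow in the
    # .ida ISAPI extension on IIS. The run of "X"s stands in for real
    # shellcode; the point is only that the query string carries
    # attacker-chosen bytes.
    request = "GET /default.ida?" + "X" * 224 + " HTTP/1.0\r\n\r\n"

    # No "write access" is ever granted: the server is compromised
    # merely by parsing the request it was asked to serve.
    with socket.create_connection((HOST, 80)) as s:
        s.sendall(request.encode("ascii"))

So "read but not write" isn't a containment boundary at all: any channel
that lets the AI choose the bytes it sends out is a write channel to
every buggy parser on the other end.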

>
> > Again, I ask: do you plan to publish publicly any kind of basic
> > description
> > of your experimental protocol, and how it works at every step of your
> > AI design and testing to lower risks? It doesn't have to be now, but I
> > think it should be available at a bare minimum 6 months before you
> > expect to have your full code up and running.
>
> Yes, we will publish this publicly. I can't promise it will be 6 months
> before the codebase is complete, but it very likely will be, as this is
> something I'll work on after the Novamente book work is done, and writing
> the Novamente book should take less time than completing the codebase!

Why not just agree, here and now, to have it ready 6 months before? And
if writing it turns out to take longer than the coding, will you agree
not to actually start running the code until 6 months after you publish
the protocol and related documentation?

>
> > So, since nowadays you are talking about having some kind of
> > committee make
> > the final decision,
>
> Actually, as I said very many times on this list, what I thought was a good
> idea was an *expert advisory board*, intimately involved with the project
> when it reaches near-takeoff stage. This does not imply that the advisory
> board has final decision making power.

Why not give them that power, if you truly believe their combined wisdom
exceeds your own?

>
> > if they come back to you and say "The .01%
> > chance we have
> > calculated that your AI will go rogue at some point in the far
> > future is too
> > much in our opinion. Pull the plug." you will pull the plug?
>
> In that exact case, Brian, it would be a hard decision. A .01% chance of an
> AI going rogue at sometime in the far future is pretty damn small.
>
> What I'd really like the experts for is to help arrive at the .01% figure in
> the first place, actually...

So at this point, you can't answer my question? I guess it is one of those
things best left to the heat of the moment :-)

>
> > Higgins seems to want "hundreds or thousands of relevant experts" to agree
> > that it is ok for you to "push the big red button". Are you ok with that?
>
> I am not OK with that, but I believe he backpedaled on that particular
> assertion.

He backed away from "thousands", but not "hundreds" as far as I can tell.

>
> A consensus among a large committee of individualists is not plausibly going
> to be achieved on *any* nontrivial issue.
>

What if they did?

-- 
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/

