From: Brian Atkins (brian@posthuman.com)
Date: Sat Jun 29 2002 - 16:47:27 MDT
(I'm obviously late to the party with my comments, but what the heck)
Ben Goertzel wrote:
>
> However, it may be even MORE dangerous to fool oneself into believing one
> has adequately grappled with the Friendliness issue prior to creating an
> infrahuman AGI.
> Here's the thing... as clarified in the previous paragraphs I just typed, we
> *do* have a Friendliness goal built in, we're just not sure yet what the best
> way is to do this. And we're not willing to fool ourselves that we *are*
> sure what the best way is....
I want to point out that the seeming implication of these quotes is the
exact opposite of what I feel SIAI currently believes. We fully expect to
find problems with our ideas as actual testing goes on. Our experimental
protocol will not, for instance, allow our prototypes access to the Internet
because, hey, we might be wrong.
Do you plan any "containment" features as part of your protocol?
Again, I ask: do you plan to publish publicly any kind of basic description
of your experimental protocol, and how it lowers risks at every step of your
AI design and testing? It doesn't have to be now, but I think it should be
available at a bare minimum of 6 months before you expect to have your full
code up and running.
> >
> > I assume that if you get your working infrahuman AI, and are unable to
> > come up with a bulletproof way of keeping it "Friendly", you will turn it
> > off?
>
> Not necessarily; this will be a hard decision if it comes to that.
>
> It may be that what we learn is that there is NO bulletproof way to make an
> AGI Friendly... just like there is no bulletproof way to make a human
> Friendly.... It is possible that the wisest course is to go ahead and let
> an AGI evolve even though one knows one is not 100% guaranteed of
> Friendliness. This would be a tough decision to come to, but not an
> impossible one, in my view.
So, since nowadays you are talking about having some kind of committee make
the final decision: if they come back to you and say, "The .01% chance we have
calculated that your AI will go rogue at some point in the far future is too
much in our opinion. Pull the plug," will you pull the plug?
Higgins seems to want "hundreds or thousands of relevant experts" to agree
that it is ok for you to "push the big red button". Are you ok with that?
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/