From: Brian Atkins (brian@posthuman.com)
Date: Sun Jun 30 2002 - 10:53:47 MDT
Ben Goertzel wrote:
>
> > Why not just agree, here and now, to have it ready 6 months before?
>
> Ummm -- because "6 months" is an arbitrary figure set by you because it is
> half the period of the Earth's rotation around the sun?
>
> I am going to use my own common sense here. I will release this
> documentation a reasonable period before we have a complete system ready to
> roll, for you and other interested parties to comment on.
Six months, for me, was the bare minimum needed to allow everyone to
pick through your plans. Actually I'd prefer it be a longer period than
that. What does your common sense tell you? Do you have a time period in
mind?
>
> > > > if they come back to you and say "The .01%
> > > > chance we have
> > > > calculated that your AI will go rogue at some point in the far
> > > > future is too
> > > > much in our opinion. Pull the plug." you will pull the plug?
> > >
> > > In that exact case, Brian, it would be a hard decision. A .01%
> > chance of an
> > > AI going rogue at sometime in the far future is pretty damn small.
> > >
> > > What I'd really like the experts for is to help arrive at the
> > .01% figure in
> > > the first place, actually...
> >
> > So at this point, you can't answer my question? I guess it is one of those
> > things best left to the heat of the moment :-)
>
> What would your answer be, if the same question were aimed at you regarding
> your own AI project?
>
> And once you tell me your answer, why should I believe you?
Clearly, coming up with better ways to measure these risks and make
decisions based on those measurements is something we all still need to
work on. I think if I were presented with such a figure, I would also want
to compare it to a group of humans who had been similarly tested. If the
figure for the AI was significantly lower than the humans', then I would
argue pulling the plug makes no sense. If the AI's figure was higher than
any of the humans', then I would either pull the plug or, if possible,
work on revising the AI to lower the risks further.
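To make the rule I just described concrete, here is a rough sketch in
Python. The function name, the "significantly less risky" margin, and the
numbers are all made up for illustration; this is not anything we have
actually built, and producing the risk figures in the first place is the
hard part.

    # Rough sketch of the "compare the AI's measured risk to a human
    # baseline" rule described above. The scoring procedure, the names,
    # and the margin are all hypothetical.
    def plug_decision(ai_risk, human_risks, margin=0.5):
        """Return a recommendation given risk-of-going-rogue estimates."""
        lowest_human = min(human_risks)
        highest_human = max(human_risks)
        if ai_risk <= lowest_human * margin:
            # AI is significantly less risky than every tested human.
            return "keep going"
        if ai_risk > highest_human:
            # AI is riskier than every tested human: revise it, or pull the plug.
            return "revise or pull the plug"
        return "keep testing and comparing"

    # Example: a .01% figure for the AI against humans testing at .05%-.2%.
    print(plug_decision(0.0001, [0.0005, 0.001, 0.002]))  # -> keep going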
I think SIAI is trustworthy on this, since we have been the leaders on
these issues: on stressing the lowering of risks, and on publicly
publishing our plans and discussing them.
>
> > > A consensus among a large committee of individualists is not
> > plausibly going
> > > to be achieved on *any* nontrivial issue.
> > >
> >
> > What if they did?
>
> If a diverse committee of transhumanist-minded individuals agreed that going
> ahead with a Novamente-launched singularity was a very bad idea, then I
> would not do it.
>
> [Unless there was some weird extenuating circumstance, like all the
> committee members being brainwashed, or paid off, etc.]
>
> However, this is not the same as your last question. Because a diverse
> committee of transhumanist-minded individuals would be incredibly unlikely
> to say "The .01% chance we have calculated that your AI will go rogue at
> some point in the far future is too much in our opinion. Pull the plug."
> This statement bespeaks a lack of appreciation of the possibility that the
> human race will destroy itself *on any given day* via nuclear or biological
> warfare, etc. It is not at all the kind of statement one would expect from
> a set of Singularity wizards, now is it?
Right, we need some way to compare all of these risks against each other.
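One crude way to do that: if the chance of the human race destroying
itself on any given day is p, then the chance of that happening at some
point over N days is 1 - (1 - p)^N. Plugging in a made-up p of one in a
million per day gives roughly 1 - (1 - 0.000001)^3650, or about 0.36%,
over a decade, which is already well above the hypothetical .01% figure
for the AI. The one-in-a-million number is only there for the sake of the
arithmetic; getting defensible numbers for either side of the comparison
is exactly the part nobody knows how to do yet.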
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/