Re: Hiding AI research from Bad People (was Re: OpenCog Concerns)

From: William Pearson (wil.pearson@gmail.com)
Date: Mon Mar 24 2008 - 14:49:54 MDT


On 24/03/2008, J. Andrew Rogers <andrew@ceruleansystems.com> wrote:
>
> On Mar 24, 2008, at 5:17 AM, William Pearson wrote:
> > On 24/03/2008, J. Andrew Rogers <andrew@ceruleansystems.com> wrote:
>
> >> Why would they do it secretly?
> >
> > If someone is manifestly on the right track to AI, I can see the
> > military mind treating it the same way as nuclear technology, keeping
> > it as secret as possible to gain an edge and avoid its use by
> > terrorists/unfriendly states. That might mean appropriating it, then
> > quietly quashing the research trying to make it appear as another
> > failed attempt in the litany of AI.
>
>
>
> This is essentially circular reasoning. DARPA et al have shown no
> capacity whatsoever to discriminate between research that is
> "manifestly on the right track to AI" and the thousands of dead ends
> out there. To put it another way, if they *were* capable of making
> meaningful discriminations, they would already know how to build AI
> and they would not need your work.

For many problems it is a lot easier to verify that a path is right
than to find one in the first place; NP problems are the classic
example. AI might be one such problem.
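To make the analogy concrete: for satisfiability, checking a proposed
assignment takes time linear in the size of the formula, while the only
known general way to find one is to search an exponential space of
assignments. A toy sketch (Python, purely illustrative):

    from itertools import product

    # Toy SAT instance: each literal is (variable_index, is_negated).
    formula = [[(0, False), (1, True)],
               [(1, False), (2, False)],
               [(0, True), (2, True)]]

    def verify(formula, assignment):
        """Check a candidate assignment: linear in the formula size."""
        return all(any(assignment[var] != neg for var, neg in clause)
                   for clause in formula)

    def solve(formula, num_vars):
        """Find an assignment by brute force: exponential in num_vars."""
        for bits in product([False, True], repeat=num_vars):
            if verify(formula, list(bits)):
                return list(bits)
        return None

    print(verify(formula, [True, True, False]))  # fast: check a given path
    print(solve(formula, 3))                     # slow: search for a path

So an agency could, in principle, recognise a correct path once shown
it without being able to find that path itself.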

> In short, it will not be manifestly obvious that you are on the right
> track until you unambiguously produce AI.

If you know this to be the case, then you must know all possible ways
to create AI, and thus must be able to create AI yourself ;)

I think it possible that a research group might unambiguously produce
non-human-level AI before producing human-level AI.

It might be a long, slow process from, say, dog level to human level,
passing through something like human level with learning difficulties.
For all we know there might be superhuman stumbling blocks to learning,
so that we produce an intelligence that gets above human level, then
can't improve itself, and can't see why it can't improve itself, due to
some faults in our theories. A damp squib of an explosion.

In another scenario, a partial theory of AI might be sufficiently
watertight and beautiful to convince engineers to follow it. Most
theories so far have been ugly hodge-podges or based on unrealistic
assumptions.

I'm not buying AI *having* to spring forth fully formed in the first
iteration of development. It might, but there is no guarantee.

  Will Pearson


