Hiding AI research from Bad People was Re: OpenCog Concerns

From: William Pearson (wil.pearson@gmail.com)
Date: Sun Mar 23 2008 - 17:26:55 MDT


On 23/03/2008, Daniel Burfoot <daniel.burfoot@gmail.com> wrote:
>
>
> On Sun, Mar 23, 2008 at 5:19 PM, Edward Miller <progressive_1987@yahoo.com>
> wrote:
> > I am wondering if it is possible to specify no
> > military use in OpenCog's license. Certainly DARPA
> > would love to get their hands on it if it is useful,
> > and maybe we ought not to let them.
>
> I think this is an excellent idea. The issue is deep and difficult, of
> course, but that doesn't mean we shouldn't at least try to do something
> about it.
>
> I imagine it would encourage greater participation in the project as well.
>
> I would like these issues to be more widely discussed. I wonder if one could
> achieve a critical mass of scientists who would sign an agreement to refuse
> DARPA funding, refuse to collaborate with DARPA-funded researchers, and to
> refuse to cite papers supported by DARPA (and also DARPA's counterparts in
> other countries).
>

Sorry to put a downer on this idea, but it smacks somewhat of naivety.
The military of any country (and DARPA is not the only agency you
might be worried about) is not going to respect a software license if
doing so harms its perceived defensive capability. These are people
who keep secrets upon secrets. They will simply use your code, secretly.

The two ways you can go are to hope that a friendly singleton can get
from human-level to hyperhuman intelligence quickly (one big genie), or
to spread the secret of singleton-resistant AI as far and as fast as
possible (many small genies with many competing goals). The
non-singleton model relies on vast intelligence magnification not
being very likely. Eliezer has written voluminously on the singleton
subject, and OpenCog is Not The Way To Go for that path.

Singleton-resistant AI would have to be similar to a human in that no
one process within it could access all of its code, and its code
(analogous to rewiring during neural plasticity) would change over
time. That would make it about as unlikely to explode as a human; in
fact, as soon as it understood some of its code, that code would
change, since the understanding would be of a procedural (literally!)
nature.
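To make that concrete, here is a minimal toy sketch in Python of what I
mean. It is purely my own illustration (not OpenCog code, and all names
are made up): each worker holds only a slice of the system's
procedures, and the act of inspecting a procedure perturbs it, so no
single part can ever build a complete, stable model of the whole.

    import random

    def mutate(code):
        # Stand-in for plasticity-like drift; a real system would
        # rewrite behaviour, not just tag the source.
        return code + "  # drifted:%d" % random.randint(0, 999)

    class Worker:
        def __init__(self, procedures):
            self.procedures = procedures   # only this worker's slice

        def introspect(self, name):
            # Understanding is "procedural": reading a procedure
            # schedules a rewrite of it, so the knowledge goes stale.
            code = self.procedures.get(name)
            if code is not None:
                self.procedures[name] = mutate(code)
            return code

    # Split the system's code across workers so no one of them
    # holds all of it.
    all_procs = {"plan": "def plan(): ...",
                 "sense": "def sense(): ...",
                 "act": "def act(): ..."}
    names = list(all_procs)
    random.shuffle(names)
    workers = [Worker({n: all_procs[n] for n in names[:2]}),
               Worker({n: all_procs[n] for n in names[2:]})]

A toy like this obviously doesn't capture the hard parts; it is only
meant to show the two properties I care about: partial self-access and
self-modification triggered by self-inspection.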

Each AI would have a different goal set by a different person: some
might be friendly, others likely not (militarily developed), and most
might be slaved to humans (or, as I prefer to think of them, exogenous
brain prostheses). You would have to try to steer the maximum amount of
compute power so that the end result is somewhat palatable.

My argument for far and fast is that any one person's mistake is then
unlikely to stomp on the rest of the people. Democracy and the markets
seem to be the least bad types of power structures we have made.

So I'd argue for less time spent philosophizing about the license,
more time spent actually making the damn thing work, and getting the
word out to spread the tech to the people you want to have it. Oh, and
making sure the design cannot easily be botnetted by a big, bad,
non-morally-constrained one.

  Will Pearson
