Open Source Friendly AI? (was Singularity and the general public)

From: James Higgins (jameshiggins@earthlink.net)
Date: Thu Apr 12 2001 - 23:48:51 MDT


At 11:41 PM 4/12/2001 -0400, Declan McCullagh wrote:
>On Fri, Apr 06, 2001 at 08:33:09PM -0400, Eliezer S. Yudkowsky wrote:
>Secrecy, even discussions of it, will increase their fear and distrust of
>you. Your only option is to play a game of chess, where the moves are open,
>rather than poker.

I quote the above merely because it sparked the thoughts below.

Has there been any serious discussion about making this an open source
project? Instead of debating how open to be, if/when to hide, etc., maybe
you should consider the exact opposite. I believe it has many advantages:

1) It becomes nearly impossible (certainly impractical) to stop the work,
since everyone would have access to it and could continue to build upon it
if the original authors could not continue.

2) It would pull in some of the rogue groups who would otherwise go it alone.

3) Open Source could massively speed up the process. Instead of having a
few coders working on it, thousands or more would be able to contribute
(with very strict quality control, of course).

4) Probably the single biggest benefit is improved quality. Open Source is
in many ways the pinnacle of code review. Having so many people study the
source would reveal far more errors and problems than an isolated team
could ever find on its own.

5) Providing a common, open source Friendly AI system would allow other
groups who insist on pursuing this themselves to incorporate your
Friendliness tech.

If your ultimate goal really is to reach the Singularity as soon as
possible, before a non-Friendly Singularity can occur, then I think this is
an ideal path to follow.



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT