From: Ben Goertzel (ben@webmind.com)
Date: Fri Apr 13 2001 - 07:07:02 MDT
FWIW, here's my current tentative plan for open-sourcing parts of Webmind AI
development.
First, open-sourcing all of the Webmind AI Engine code isn't an option for
us because of our investors. If it were, though, I imagine the result would
be this: a ~small~ in-group of people would become interested enough to
work on it, and this group would be the only one who understood it. It
seems to me that a similar effect can be achieved by inviting a small
in-group of people to join a closed-source effort.
Second, having said this, we ~are~ seriously considering open-sourcing a lot
of our code: all but the most important parts! As it stands now, we have
some nice NL processing code, a lovely Java GP system, a distributed Java
agents system called the "psycore" or Mind OS (which is a couple hundred
thousand lines of Java), complete with a couple of scripting languages, etc.
All these things are built to interact with each other and with the
"cognitive core" of the system, but they also have meaning on their own.
But the crux of the Webmind brain is contained in about 2000 lines of C code
that make up the "cognitive core", embodying reasoning, association-finding,
associative memory, attention allocation, concept formation, and
cognitive/perceptual/active schema execution. Frankly, getting this sort of
thing to function well is pretty intense work, and it seems best done by a
small, tightly-knit group of, say, 4-7 people.
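Purely as an illustration of the kind of interplay I mean, and with names
I'm making up for this email rather than taking from our actual code, the
skeleton of such a core might look roughly like this in Java:

    // Hypothetical sketch only; these names are invented for this
    // email and do not come from the Webmind codebase.
    public class CoreSketch {

        // The cognitive processes listed above, behind one interface.
        interface CognitiveCore {
            void allocateAttention();  // decide which items get cycles
            void infer();              // carry out a step of reasoning
            void findAssociations();   // build/strengthen associative links
            void formConcepts();       // cluster items into new concepts
            void executeSchemata();    // run cognitive/perceptual/active schemas
        }

        // One pass of the (hypothetical) main cognitive cycle.
        static void cycle(CognitiveCore core) {
            core.allocateAttention();
            core.infer();
            core.findAssociations();
            core.formConcepts();
            core.executeSchemata();
        }
    }

The point is not the particular method names, but that all of these
processes have to work together in one tight loop, which is part of why a
small, tightly-knit group does this kind of thing best.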
Once it's achieved a high degree of functionality & intelligence, could it
benefit from a larger team? Sure. At that point there's a serious choice
between open-sourcing it and obtaining enough money to build a huge
proprietary research team around it.
About Linux, BTW: it obviously is less buggy than Windows, but it's a
different kind of development from AI development. Linux is founded on
well-understood principles; it's engineering work. AI development requires
a very difficult and peculiar combination of conceptual, scientific and
engineering work, which is difficult to manage whether in a closed-source
or open-source situation. My guess is that until there is a
well-functioning AI system to seed the effort, managing such an effort in
an open-source setting would be close to impossible. The reason is that,
although there are a lot of people interested in AI, most of them have
their own idiosyncratic ideas about how to do it, and getting everyone's
intuitions working in harmony in a globally distributed,
everyone's-his-own-boss setting would require truly fantastic, perhaps
superhuman visionary leadership.
-- Ben
> -----Original Message-----
> From: owner-sl4@sysopmind.com [mailto:owner-sl4@sysopmind.com] On Behalf
> Of Brian Atkins
> Sent: Friday, April 13, 2001 2:18 AM
> To: sl4@sysopmind.com
> Subject: Re: Open Source Friendly AI? (was Singularity and the
> general public)
>
>
> James Higgins wrote:
> >
> > At 11:41 PM 4/12/2001 -0400, Declan McCullagh wrote:
> > >On Fri, Apr 06, 2001 at 08:33:09PM -0400, Eliezer S. Yudkowsky wrote:
> > >Secrecy, even discussions of it, will increase their fear and
> > >distrust of you. Your only option is to play a game of chess,
> > >where the moves are open, rather than poker.
> >
> > I quote the above merely because it gave me the spark to think
> > about the below.
> >
> > Has there been any serious discussion about making this an open source
> > project? Instead of debating how open to be, if/when to hide, etc.
> > maybe you should consider the exact opposite. I believe it has many
> > advantages.
>
> Yep, we've thought about it.
>
> >
> > 1) It becomes nearly impossible (definitely impractical) to stop the
> > work since everyone has access to it and could continue to build upon
> > it if the original authors could not continue.
>
> Maybe, but why bother unless you have to? Perhaps we'll set up a remote
> location with a copy of all our work to be released just in case the
> government cracks down (highly unlikely...) on us. Isn't it sad that we
> even have to worry about such stuff happening?
>
> >
> > 2) It would pull in some of the rogue groups who would go it alone.
>
> I don't see how this would help. By releasing our code and ideas we
> actually encourage/help along splinter groups. If we keep the code and
> ideas more private then they have to come and chat with us if they don't
> want to try to reinvent the wheel.
>
> >
> > 3) Open Source could massively speed up the process. Instead of having a
> > few coders working on it, thousands or more would be able to
> > contribute. (with very high quality control, of course)
>
> Actually, open source has so far in general proven itself to be a much
> slower method of software development. And it gets worse on larger/more complex
> projects. How many hackers do you think will really be able to contribute
> much to an AI project? Many fewer than can contribute to something like
> Mozilla, and look how slow that thing has gone.
>
> >
> > 4) Probably the single biggest benefit is improved quality. Open Source
> > in many ways is the pinnacle of code reviews. Having so many eyes study
> > the source would reveal far more errors and problems than an isolated
> > team could ever accomplish.
>
> Actually, I've seen that Linux has more bugs reported on Bugtraq than any
> other operating system. Is this because it is buggier, or because more
> stuff gets found? Perhaps someday Mozilla will become better than Internet
> Explorer in terms of stability, but for now the closed-source approach
> using highly skilled programmers has worked better.
>
> >
> > 5) Providing a common, open source Friendly AI system would allow other
> > groups who insist on pursuing this themselves to incorporate your
> > friendly tech.
>
> Or allow Saddam Hussein to get his evil AI up and running that much
> faster.
> To be realistic, real AI is an extremely powerful technology, and our view
> is to not hand it over to people we don't know and trust.
>
> >
> > If your ultimate goal really is to get to the singularity as soon as
> > possible, before a non-friendly singularity can occur, I think this is
> > an ideal path to follow.
>
> We disagree, and would only pursue such a pathway as a last resort.
> --
> Brian Atkins
> Director, Singularity Institute for Artificial Intelligence
> http://www.intelligence.org/