From: Ben Goertzel (ben@intelligenesis.net)
Date: Sat Sep 30 2000 - 20:02:39 MDT
My own view is that AI doesn't have that much to gain from the open-source
philosophy anyway -- not at this stage. Not unless it can be embedded in
something that appeals to a massive variety of people. Open source seems
to be very successful with things that have a fairly generic user base,
such as an OS or a game.
Open-source AI projects that I know of, such as the Weka machine learning
library, don't really gain much from being open source in practice. Their
open-sourceness is more a philosophical statement than a practical tool.
Pretty much the only people who ever contribute to Weka are its creators
and their close friends. The amount of specialized knowledge required to
contribute usefully to Weka is fairly great, and, for whatever reason,
most people who need to use machine learning in practice don't have this
knowledge, whereas most people doing machine learning research prefer to
use their own systems rather than Weka. (In my view this latter group is
largely misguided, since Weka provides a very nice framework for doing
cross-validation testing of machine learning algorithms, but I'm just
reporting observed facts, not lauding them.)
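
For a concrete sense of what that framework buys you, here is a rough
sketch of a ten-fold cross-validation run against Weka's Java API. The
class names follow the current Weka 3 package layout, which may differ
from older releases, and the dataset path is just an illustrative
placeholder.

import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CrossValidationDemo {
    public static void main(String[] args) throws Exception {
        // Load a dataset in Weka's ARFF format (the path is a placeholder).
        Instances data = new DataSource("iris.arff").getDataSet();
        // Weka convention: tell the dataset which attribute is the class.
        data.setClassIndex(data.numAttributes() - 1);

        // Stratified 10-fold cross-validation of a decision-tree learner.
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(new J48(), data, 10, new Random(1));
        System.out.println(eval.toSummaryString("=== 10-fold CV ===", false));
    }
}

Swapping in a different learner is a one-line change, which is the whole
point of a common evaluation harness.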
Later, once a real AI has been demonstrated, there will be great value in
making portions of it, or all of it, open source, because there will be
wide interest: people will take the time to learn enough to contribute
meaningfully. The collective minds of the hackers of the world, combined
with the collective mind of various instantiations of the software itself,
will give AI a huge boost beyond what the initial group of engineers and
scientists has done.
-- ben goertzel
> -----Original Message-----
> From: owner-sl4@sysopmind.com [mailto:owner-sl4@sysopmind.com] On Behalf
> Of Michael LaTorra
> Sent: Saturday, September 30, 2000 9:48 PM
> To: sl4@sysopmind.com
> Subject: RE: About that E-mail:...
>
>
> I think Ben is correct that no one will take serious action against
> developers of AI unless or until they seem to be achieving success.
>
> All the more reason for being circumspect about what you may have
> achieved until it is too late for anyone else to stop you.
>
> No open source code on this project!
>
> Regards,
> Michael LaTorra
> mike99@lascruces.com
>
>
>
> -----Original Message-----
> From: owner-sl4@sysopmind.com [mailto:owner-sl4@sysopmind.com] On Behalf
> Of Ben Goertzel
> Sent: Saturday, September 30, 2000 7:36 PM
> To: sl4@sysopmind.com
> Subject: RE: About that E-mail:...
>
>
>
> I really doubt this is true... I don't think that anyone will be hunted
> down or otherwise harassed until demonstrable superhuman intelligence has
> been ACHIEVED.
>
> For instance, I've had a start-up company devoted to creating superhuman
> intelligence for 3 years now, and no one has harassed me, because NO ONE
> BELIEVES WE CAN REALLY DO IT...
>
> Once the thinking machine is demonstrated -- ~then~ we'll have to start to
> worry...
>
> ben
>
> > -----Original Message-----
> > From: owner-sl4@sysopmind.com [mailto:owner-sl4@sysopmind.com] On Behalf
> > Of Josh Yotty
> > Sent: Saturday, September 30, 2000 8:05 PM
> > To: Sl4
> > Subject: About that E-mail:...
> >
> >
> > I'm willing to bet the people working toward superhuman intelligence
> > will be hunted down. Of course, the people hunting us down will be
> > irrational, ignorant, narrow-minded, and stupid. If I remember
> > correctly, less than ten percent of the world's population can be
> > classified as rational. (This is a matter of temperament; check out
> > what I mean at http://www.keirsey.com.)
> >
> > Images of the Salem witch trials come to mind. We probably will
> > not be safe. People will automatically think that we are trying to:
> >
> > A) Take over the world.
> > B) "Purify" the world by killing most of humanity
> > C) Any other stupid reason you can think of.
> >
> > Humanity, as a whole, is stupid. America is made up of media zombies
> > who act on gossip, whims, and rumors. Having an original and true
> > thought really hurts. In fact, Bill Joy's "Why the Future Doesn't Need
> > Us" in Wired quoted a passage from Kaczynski (the Unabomber) that
> > stated, basically, that superhuman intelligence would either destroy
> > us all or take away all meaning from life.
> >
> > Well, anyway, we might have to move to another country, or gradually
> > introduce the concept to other people, or not inform the general
> > public (either by not saying anything or by hiding it behind lots of
> > technospeak, computer jargon, and large-word gobbledygook).
> >
> > What do you think?
> > Josh Yotty
> > | Orion Digital |
> > oriondigital@techie.com
> > http://www.crosswinds.net/~oriondigital/
> >
>
>