From: Brian Atkins (brian@posthuman.com)
Date: Thu Apr 26 2001 - 17:19:47 MDT
Ben Goertzel wrote:
>
> > O.k., but I'm not "a skeptic". I could be convinced by
> > considerably less than
> > a complete working system -- but also by significantly more than
> > you've said.
> > (That's not a criticism.)
> >
>
> Understood. There's a 370-page book that describes how our system works,
> and it's currently available only to people who need to see it, and under
> NDA....
>
Obviously this is a critical issue for both SIAI and Webmind. Personally
I was convinced through a combination of three things:
1. Humanity seems to be in an unstable state now, and will eventually settle into one of two attractors: a) killing itself off, or b) achieving some sort of technologically stable post-Singularity state. If you accept this, then obviously speeding up (b) is better than waiting around for (a) to possibly occur.
2. Becoming convinced that either the Singularity will automatically be a good thing for humanity, or, if it could go either way, that whoever gets there first will determine which way it goes. Sitting by and watching it happen (ESPECIALLY if you have spare cash you could use to influence it) is not a rational position if you believe there are risks in how the Singularity turns out.
3. There is a concrete path to influencing the Singularity: fund an org that creates a real AI. In fact, one of the nice (or really bad, depending on your viewpoint) things about the Singularity is that the closer we get to it, the less funding it takes to have a super-large impact. Never before in history has there been an opportunity for such a huge return on investment :-)
I don't see how anyone who fully internalizes those three items can come to any other conclusion about what to do with unused funds they may have sitting around. The hard part is getting potential funders to fully internalize/accept these things... some people don't believe a Singularity can occur, others think it will be bad, and others think that AI is too hard. It certainly requires some optimism... SIAI, at least, is doing its best to attack #2 with our Friendly AI document.
As for Webmind, you are in a different zone: you are trying to get investors to fund it based on the promise of an economic return later on. That approach might actually be harder, except for the fact that you guys already have some good demo applications, working code, and even some revenue-generating clients. I'm surprised no one so far has been willing to revive it!
--
Brian Atkins
Director, Singularity Institute for Artificial Intelligence
http://www.intelligence.org/