RE: Convincing wealthy benefactors to back real AI research

From: Ben Goertzel
Date: Fri Apr 27 2001 - 08:13:56 MDT

Hi Eliezer,

Obviously, your perception of the strengths of myself and the Webmind AI
Engine is very different from my own perception of these things.

Firstly: I believe the Webmind AI architecture is adequate to lead to the
Singularity. You don't. That's fine. For one thing, you don't understand
the AI architecture because it's never been explained to you in detail. For
another thing, this is a topic on which differences of opinion are naturally
going to occur.

Let's suppose I gave you enough info to let you study the AI Engine design
in detail -- which would take you a couple months. After that, there would
be a couple-months period of question-answering and arguing, and ~then~,
after all that, I'd be really interested to know your evaluation of the
Singularity potential of the system. Before that, your opinion isn't
adequately informed to be valuable to me. I don't say this to fault you;
since you don't know how the system works, you of course are forming your
best opinion based on the information available to you.

Secondly: The notion that Webmind Inc. has a good chance of becoming the
next M$ is naive. I am not a great businessman. I am a scientist,
philosopher and writer (and weekend musician ;). Through 3 years in
business I have begun to understand what qualities make a great businessman.
I believe I could help build a great business team, my second time around --
by recruiting people who ~are~ great businessmen to run the business, and
restricting myself to a combination evangelist/chief-scientist role.

But building a great business like M$ is ~hard~, and taking on this goal at
the same time as the goal of building real AI is perhaps foolish. One thing
I've learned about business is that in business as in science and
engineering ~focus~ is important. Having dual goals of building real AI and
making money in the same organization is confusing; one is trying to solve 2
hard problems at once. At the very least, I feel that, going forward, the two
goals need to be somehow separated. There can be a business that seeks to
leverage the real AI for the purpose of making profit, but the business also
needs to be free to use less-than-real-AI technology if this seems to be the
best way to make money.

This is my motivation for thinking about there being 2 different Webmind
Inc. successors (aside from the Market Predictor hedge fund successor that's
being posited, but isn't relevant to this discussion):

-- a nonprofit focused on making real AI

-- a for-profit company focused on bringing AI to various niche markets,
using a combination of whatever techniques (advanced or simple) seem to best
meet market needs

In this way each organization has a clear focus.

If the WM design has its shortcomings, we will discover them through ongoing
R&D. I believe the design is in the right direction and won't need a total
overhaul, but only some tweaks. The kind of R&D required to get real AI to
work is in many ways different from the kind of R&D required to make the AI
Engine useful in current commercial products -- another life lesson of the
last couple years.

If the AI Engine is to be a commercial project, it needs to be in a
big-company research lab setting. But the big companies with AI research
labs appear to be too conservative at the moment to take such a project on.
In the current market, there is no WAY to raise $$ for a start-up with a
substantial "pure R&D" function. That I managed to create such a start-up
and maintain it for a few years was a wonderful thing, but was a consequence
of the Internet bubble, which is now gone. With more business savvy and luck we
might have parlayed our lucky start into an ongoing business with enough
profitability to support an ongoing real AI R&D group, but, well, we blew
that chance and now have to move on, and craft a new future consistent with
the current market situation and our position within it.

-- Ben

> -----Original Message-----
> From: [] On Behalf
> Of Eliezer S. Yudkowsky
> Sent: Friday, April 27, 2001 1:05 AM
> To:
> Subject: Re: Convincing wealthy benefactors to back real AI research
> Ben Goertzel wrote:
> >
> > Hi Brian,
> >
> > Your points are very well taken
> >
> > I am seriously considering raising funds for 2 separate entities
> >
> > -- a nonprofit org dedicated to creating "real AI" by
> continuing the work on
> > the Webmind AI Engine
> Ben, I'd strongly advise against this. In part, this is because of basic
> disagreements we have about Webmind's architecture. I don't think you can
> create a Singularity as a nonprofit. I do think you can make enormous
> amounts of money as a for-profit. I can see a day when "Webmind" means
> "AI" the same way that Microsoft means software or GE means lightbulbs. I
> would really like to see you taking Webmind public and making a huge
> amount of money, because then I can hit you up for funding for the
> Singularity Institute. But Webmind *isn't* advanced enough to build a
> Transition Guide in the basement - it's just advanced enough to make a ton
> of money.
> If you turn Webmind into a nonprofit - if you turn the Webmind AI Engine
> into a nonprofit - then I don't see where the ton of money comes in. I
> realize you have unlimited faith in the ability of lawyers to diddle the
> System, but you'll still be seriously limiting your total ability to
> profit by giving the core IP to a nonprofit, because the system is set up
> so that once the property enters the nonprofit universe, that property is
> forever after used in a way consonant with the public benefit. I can
> easily see, say, Microsoft suing [...] for its heinous self-dealing
> in licensing its software only to [...]. Once the government makes
> you tax-exempt for the public benefit, they have an almost unlimited right
> to demand that you act for the public benefit. Selling a product can
> sometimes be a public benefit, but selling a product for whatever the
> market can bear is not a public benefit. There have already been, in
> recent times, legal complaints about nonprofit corporations that are
> behaving too much like real corporations. There are features of the
> System intended to prevent, for example, a corporation making all its R&D
> work tax-deductible while retaining sole control of it. The IRS examiner
> will *ask*. (They asked *us*.)
> Now, maybe you can get away with diddling the System as long as only one
> side has expensive lawyers, but I don't think you can do it if another
> company, like Microsoft, opposes you with *their* own expensive lawyers.
> Or maybe I'm being naive and the whole thing is a sham intended to impress
> a gullible public. Feel free to tell me, if so.
> You also can't make nearly as much money taking a company public if a
> nonprofit is licensing you all your IP, and you certainly can't become a
> Microsoft. The stock-market investors will notice. Right?
> > I think I can get $$ for this, in a modest amount sufficient to support,
> > say, 12 guys in Brazil and 3 in the US. This should be enough to get the
> > system finished within a couple years.
> >
> > I note that some of the investors I think I can get, may be too mentally
> > conservative to believe in the Singularity, but may still believe that
> > "real AI" research is a cool thing and should be funded
> >
> > -- a for-profit company focusing on the existing and proven technology
> > components, leveraging particular technologies from within the AI Engine.
> > This company of course will use further results as they come
> out of the real
> > AI research group, under some appropriate legal arrangement,
> but won't fund
> > far-reaching AI Dev in itself
> >
> > I'm suspecting that this bifurcation will make fundraising easier, as
> > investors rightly like to see a tight focus in the
> organizations they invest
> > in.
> I think bifurcating would totally blow Webmind's potential to become the
> next Microsoft. In the case of the Singularity Institute, we're a
> nonprofit because we *are* in this for the Singularity. I at first
> thought of using a dual corporate structure like the one you describe, but
> then, on reconsidering, decided I wasn't even sure that I wanted to
> release interim versions of the AI for use in data-mining and so on. And,
> and I emphasize this, if the Singularity Institute *did* decide that some
> product was beneficial to civilization and that it would be a good idea to
> sell it, or fork off a for-profit to sell it, we don't *need* to become
> the next Microsoft. It's not our mission in life. We could make a
> *modest* profit, as much as we need to go on ticking, and no more.
> Webmind has the potential to become the next Microsoft; furthermore, in my
> humble evaluation, AI Engines present no threat to civilization. And -
> unless I miss my guess - you, Ben Goertzel, want to be the next Bill
> Gates. It matters to you. Now, I might someday make a comfortable living
> as an AI programmer at SIAI, maybe even be on the Board of Directors or
> consultant to some spinoff company that goes public, and make a couple mil
> off my half-percent of the shares in the IPO, but anything above a few
> million dollars would be totally unnecessary to reaching the Singularity.
> I decided that when I decided to go the nonprofit route, and it was an
> emotional wrench. Because previously I had, in the back of my mind,
> retained some hope of being the next Bill Gates. I had to deliberately
> say, "This isn't necessary to the Singularity. This is just me wanting to
> play hero." I would now, of course, phrase it as being "The human bias
> towards context-insensitive personal power at the expense of
> context-sensitive altruistic power." A million dollars would be useful to
> me personally, but if I have to get to the Singularity on an entirely
> ordinary salary, I can do it. And I wouldn't mind.
> Unless your goal system has seriously changed in the last month, my
> reading on you says that you believe in a balance between personal goals
> and altruistic goals, rather than trying for total altruism. And I
> respect that. The point I'm making is that, when I personally decided to
> go the nonprofit route, I noticed that the decision was strictly dependent
> on a very strong skew towards altruism. My bet is that you would tell me
> that - in terms of emotional balance - I was being stupid and
> ostentatiously self-sacrificing. Well, I disagree. But according to your
> current goal system, as I understand it, it makes far more sense to take a
> little extra risk to keep it a for-profit endeavor.
> Just shift to talking to the investors about the humanistic and scientific
> tragedy of letting Webmind die, and ask for enough funding to keep the
> Brazilian team together, while explaining that you don't want to shift
> entirely to the nonprofit sphere because you don't want to limit the
> commercial potential of the project if you do succeed. Speaking as a
> nonprofit guy who doesn't own any stock in Webmind, I think that the
> benefit to humanity of Webmind lies in your becoming a megacorporation and
> marketing and selling lots and lots of products, and that there wouldn't
> be as much benefit to humanity if you gave your stuff away or sold it at
> breakeven.
> -- -- -- -- --
> Eliezer S. Yudkowsky
> Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT