Re: Convincing wealthy benefactors to back real AI research

From: Eliezer S. Yudkowsky
Date: Thu Apr 26 2001 - 23:04:50 MDT

Ben Goertzel wrote:
> Hi Brian,
> Your points are very well taken
> I am seriously considering raising funds for 2 separate entities
> -- a nonprofit org dedicated to creating "real AI" by continuing the work on
> the Webmind AI Engine

Ben, I'd strongly advise against this. In part, this is because of basic
disagreements we have about Webmind's architecture. I don't think you can
create a Singularity as a nonprofit. I do think you can make enormous
amounts of money as a for-profit. I can see a day when "Webmind" means
"AI" the same way that Microsoft means software or GE means lightbulbs. I
would really like to see you taking Webmind public and making a huge
amount of money, because then I can hit you up for funding for the
Singularity Institute. But Webmind *isn't* advanced enough to build a
Transition Guide in the basement - it's just advanced enough to make a ton
of money.

If you turn Webmind into a nonprofit - if you turn the Webmind AI Engine
into a nonprofit - then I don't see where the ton of money comes in. I
realize you have unlimited faith in the ability of lawyers to diddle the
System, but you'll still be seriously limiting your total ability to
profit by giving the core IP to a nonprofit, because the system is set up
so that once the property enters the nonprofit universe, that property is
forever after used in a way consonant with the public benefit. I can
easily see, say, Microsoft suing the nonprofit for its heinous self-dealing
in licensing its software only to your for-profit arm. Once the government
makes you tax-exempt for the public benefit, they have an almost unlimited right
to demand that you act for the public benefit. Selling a product can
sometimes be a public benefit, but selling a product for whatever the
market can bear is not a public benefit. There have already been, in
recent times, legal complaints about nonprofit corporations that are
behaving too much like real corporations. There are features of the
System intended to prevent, for example, a corporation making all its R&D
work tax-deductible while retaining sole control of it. The IRS examiner
will *ask*. (They asked *us*.)

Now, maybe you can get away with diddling the System as long as only one
side has expensive lawyers, but I don't think you can do it if another
company, like Microsoft, opposes you with *their* own expensive lawyers.
Or maybe I'm being naive and the whole thing is a sham intended to impress
a gullible public. Feel free to tell me, if so.

You also can't make nearly as much money taking a company public if a
nonprofit is licensing you all your IP, and you certainly can't become a
Microsoft. The stock-market investors will notice. Right?

> I think I can get $$ for this, in a modest amount sufficient to support,
> say, 12 guys in Brazil and 3 in the US. This should be enough to get the
> system finished within a couple years.
> I note that some of the investors I think I can get, may be too mentally
> conservative to believe in the Singularity, but may still believe that
> "real AI" research is a cool thing and should be funded
> -- a for-profit company focusing on the existing and proven technology
> components leveraging particular technologies from within the AI Engine.
> This company of course will use further results as they come out of the real
> AI research group, under some appropriate legal arrangement, but won't fund
> far-reaching AI Dev in itself
> I'm suspecting that this bifurcation will make fundraising easier, as
> investors rightly like to see a tight focus in the organizations they invest
> in.

I think bifurcating would totally blow Webmind's potential to become the
next Microsoft. In the case of the Singularity Institute, we're a
nonprofit because we *are* in this for the Singularity. I at first
thought of using a dual corporate structure like the one you describe, but
then, on reconsidering, decided I wasn't even sure that I wanted to
release interim versions of the AI for use in data-mining and so on. And,
and I emphasize this, if the Singularity Institute *did* decide that some
product was beneficial to civilization and that it would be a good idea to
sell it, or fork off a for-profit to sell it, we don't *need* to become
the next Microsoft. It's not our mission in life. We could make a
*modest* profit, as much as we need to go on ticking, and no more.

Webmind has the potential to become the next Microsoft; furthermore, in my
humble evaluation, AI Engines present no threat to civilization. And -
unless I miss my guess - you, Ben Goertzel, want to be the next Bill
Gates. It matters to you. Now, I might someday make a comfortable living
as an AI programmer at SIAI, maybe even be on the Board of Directors or
consultant to some spinoff company that goes public, and make a couple mil
off my half-percent of the shares in the IPO, but anything above a few
million dollars would be totally unnecessary to reaching the Singularity.
I decided that when I decided to go the nonprofit route, and it was an
emotional wrench. Because previously I had, in the back of my mind,
retained some hope of being the next Bill Gates. I had to deliberately
say, "This isn't necessary to the Singularity. This is just me wanting to
play hero." I would now, of course, phrase it as being "The human bias
towards context-insensitive personal power at the expense of
context-sensitive altruistic power." A million dollars would be useful to
me personally, but if I have to get to the Singularity on an entirely
ordinary salary, I can do it. And I wouldn't mind.

Unless your goal system has seriously changed in the last month, my
reading on you says that you believe in a balance between personal goals
and altruistic goals, rather than trying for total altruism. And I
respect that. The point I'm making is that, when I personally decided to
go the nonprofit route, I noticed that the decision was strictly dependent
on a very strong skew towards altruism. My bet is that you would tell me
that - in terms of emotional balance - I was being stupid and
ostentatiously self-sacrificing. Well, I disagree. But according to your
current goal system, as I understand it, it makes far more sense to take a
little extra risk to keep it a for-profit endeavor.

Just shift to talking to the investors about the humanistic and scientific
tragedy of letting Webmind die, and ask for enough funding to keep the
Brazilian team together, while explaining that you don't want to shift
entirely to the nonprofit sphere because you don't want to limit the
commercial potential of the project if you do succeed. Speaking as a
nonprofit guy who doesn't own any stock in Webmind, I think that the
benefit to humanity of Webmind lies in your becoming a megacorporation and
marketing and selling lots and lots of products, and that there wouldn't
be as much benefit to humanity if you gave your stuff away or sold it at cost.

-- -- -- -- --
Eliezer S. Yudkowsky
Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT