From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Fri Oct 22 2004 - 15:20:09 MDT
Along with the freedom to question authority must come the realization
that authority is sometimes busy (I'm moving to a new apartment within
Atlanta in the next week) and may not be able to answer *all* of your
questions, so pick a criterion that says it's okay to answer just some.
"Why not invent a business, make enormous loads of money like everyone who
founds a technology startup, and use that to fund the AI effort?"
I used to think that way. In 1998, when I published "Coding a Transhuman
AI"... it feels like centuries ago... Brian Atkins wrote me and said, d'you
suppose this is something that a small team of programmers could do? And I
said, nah, the Singularity is thirty years off, wanna start a business
instead? Then we'd have enough money to launch the planet-sized Manhattan
project it would take to shorten the Singularity ETA by a few years. And
Brian Atkins said, sure. It was the summer of the dot-com era. So I came
up with a bright idea I'd never seen done before, and spent the next six
months studying venture capital, marketing, markets, IPOs, corporate
structures, and writing a business plan. I can't remember whether it was
Brian Atkins or me who first realized it wasn't going to work, but the
realization was nearly simultaneous.
Developing a new technology and launching a corporation is really, really,
really, really, really, really, really, really, really, really, really,
really, hard.
If you start a company, it's your life. You don't do it on the way to
something else.
No matter how hard you try, you can't possibly have more than a fifty
percent chance of success. A realistic number would be more like ten
percent for technology companies. Everyone believes they're going to beat
the odds. At that point in my life, at the age of eighteen, I hadn't read
my Tversky and Kahneman and I didn't know how absurdly optimistic people
who start businesses are (experimental result, please note). If I'd known
in 1998 I wouldn't even have dreamed of trying. Well, maybe I would, with
the Singularity comfortably ensconced thirty years in the future.
It was hard to make the mental shift to planning to work for a nonprofit.
I had dreams, fantasies of fantastic wealth. I let go. It had been a
child's thought.
If you insert the attempt to start a new technology company into the set of
chained probabilities leading up to a successful Singularity, it has to
reduce the final probability of success by at least three-quarters.
Probably more. The more innovative the technology and the newer the
market, the lower the probability of success.
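(To make the arithmetic concrete: probabilities along a chain multiply, so
inserting an extra step scales the final probability by that step's chance
of success, whatever the rest of the chain looks like. A minimal sketch in
Python, with illustrative numbers that are not from the original argument:

    # Hypothetical numbers, for illustration only.
    p_rest = 0.5       # chance everything *else* on the path goes right
    p_startup = 0.10   # realistic success rate for a technology startup
    p_total = p_startup * p_rest       # chance with the startup step inserted
    reduction = 1 - p_total / p_rest   # fraction of the original chance lost
    print(p_total)     # 0.05
    print(reduction)   # 0.9 -- a cut of more than three-quarters

Note that the reduction equals 1 - p_startup no matter what p_rest is.)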
In the very unlikely event you succeed at the business and are not shot
down by any of the dozen different causes of failure, the business becomes
your life. "Life is what happens to us while we are making other plans."
If you want to accomplish your plans, you have to stick to them, and not
get sidetracked by life. This is a truth harsh enough that most people
aren't willing to face up to it, and so their life happens to them while
they are making other plans.
And even if you succeed, you lose years. I don't think the human species
has that kind of time. If we win, and we may not win, it's going to be
close. We need the difficult cure before the easy disease. Add a delay of
six years (and probably more) to start a corporation, assume the unlikely
event of success, and by the time we were done it would probably be just
too late. Also, *I* don't have time. I have to do my thinking while I'm
still young enough to think. I have to use my annus mirabilis years on AI
and nothing else. Youth is a non-renewable resource for solving scientific
problems. I still live in fear of running out of youth before I run out of
fundamental AI problems requiring basic shifts in thinking and deep mental
retraining.
Oh, and fifty-five million humans die forever every year. It's a piece of
knowledge that sinks its way into you very slowly, because it's so hard to
comprehend. But I understand more as time goes on. I'll try and stay sane
(well, reasonably sane) regardless, which is the most critical part of this
job and the part that most people would instantly flub.
It might make sense for other SIAI supporters to try and start companies
with the goal of funding the Singularity Institute if they succeed. Some
of them are. It does not make sense for me to do so, unless I am willing
to entirely sacrifice my potential as an FAI researcher.
There is only one thing that I can try to accomplish with my life and it is
solving the FAI problem. Try to do anything else, and I throw away my
chance, assuming I have one.
But mostly, I think people making the suggestion don't know how unlikely a
new company is to succeed, how much energy it takes from a person, the cold
statistics of technology startup survival rates, and the wry psychology
experiments showing that everyone thinks they can beat the statistics
because their plan is different.
As for the comment about monks holding out begging bowls... either this is
a suggestion that all charities stay small and accomplish nothing, which is
false-to-fact, or it is a suggestion that begging is beneath a proud
Earth's defender. In which latter case the speaker does not begin to
comprehend the concept of "existential risk". Pride counts for nothing.
It vanishes like a snowflake in the ocean, a feather buried under the
weight of lives. Yes, it hurt to ask for help, just like it hurt to give
up my dreams of being a millionaire. I did it anyway because people were
dying. I suppose that sounds hokey to some people, like I'm making a big
deal over nothing and feeling too much. Fine. Maybe someday you'll
understand that it was all real, or maybe you'll die before then. Maybe
you'll die one day before the bus arrives. If so, I'd like to apologize
now for not working just a little bit faster.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence