From: Eliezer Yudkowsky (email@example.com)
Date: Wed Dec 29 2004 - 16:27:43 MST
justin corwin wrote:
> What these people are really asking, via the agency of hypothetical
> investors, is "what is AI good for, and how can that be delivered from
> your research."
One of the most subtle and pernicious biases against rationality - one that
I struggle with myself - is the tendency for your answers to change
depending on the questions people ask you. The more you have to argue
against some particular question people bug you with, the more your
intellectual focus shifts to that question - even if it's the wrong
question. Would I have been so mistaken earlier in my life about the
inevitable benevolence of SIs, if people on the Extropians list hadn't
bugged me about it? Yes, I would have. But I suspect I wouldn't have been
so mistaken about that question, if earlier in my life, my parents and a
number of annoying other people hadn't repeated to me over and over the
cliche wisdom that intelligence has nothing to do with morality or
benevolence or emotional maturity. Which is not, in fact, how things work
in humans. From defending against that earlier fallacy, I set myself up to
make the later mistake in the case of AIs.
All those relatives and venture capitalists you so desperately want to
convince will always ask you the wrong questions and demand that you defend
points that are not the most important points. In this case, what matters
is the survival of humane life. This requires that someone build a
Friendly AI (difficult problem) before someone hacks together a
self-improving thing that goes FOOM (not an easy problem, but easier than
FAI). Venture capitalists won't ask you about that. But you'll argue with
venture capitalists, and develop all sorts of clever arguments for
persuading them (and yourself), and your focus will change to proving that
AI *really can* be commercially viable, and you'll lose track of the one
most important thing.
> Artificial Intelligence, even very weakly achieved, is not just
> another technology. It represents, at the very least, a complete
> industry, and most likely, is one of those events that redefines the
> landscape of human activity.
I like the "most likely" qualification you place on that. It certainly
helps maintain the image of your rationality if you try to make so
startling a case to venture capitalists - it sounds like you're questioning
yourself, even though you're not. But if something is "most likely" true,
you have to plan for it; to plan for it, you have to accept it as a
starting point and not be embarrassed to think about it or talk about it.
Otherwise you'll flinch away from the crazy-sounding true thoughts and
stick with the moderate, publicly acceptable, false thoughts.
If the conditional probabilities P(X|A), P(Y|A), P(Z|A) all depend on A, then
to *plan* for the "most likely" eventuality of A, we have to work with
those conditional probabilities, and not be continually distracted by talk
of "but of course, all this talk of X assumes that A holds". Suppose A
holds. It can happen, you know, you even called it "most likely". If so,
then you can't factor the doubt of A independently into your plans for X,
Y, Z that follow from A. When you're developing your conditional plan for
dealing with A, you have to take A as an accomplished fact so that you
don't think that P(X|A) is necessarily at least as uncertain as P(A).
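The bookkeeping here can be sketched numerically (every probability below is a made-up illustration, not an estimate of anything):

```python
# Factoring the uncertainty of A into a conditional plan exactly once.
# These numbers are invented purely for illustration.

p_A = 0.7          # "most likely": the probability that A holds at all
p_X_given_A = 0.9  # probability of X, taking A as an accomplished fact
p_Y_given_A = 0.8  # probability of Y, likewise conditional on A

# Correct: the doubt about A discounts the whole conditional plan once.
p_X = p_A * p_X_given_A   # P(X and A) = 0.63
p_Y = p_A * p_Y_given_A   # P(Y and A) = 0.56

# The fallacy: re-applying the doubt about A to each conditional step,
# as if P(X|A) inherited the uncertainty of A all over again.
fallacious_p_X = p_A * (p_A * p_X_given_A)   # 0.441 -- A double-counted
```

The point is only that P(A) enters once, as a weight on the whole contingency plan, not once per consequence.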
When we plan for that "most likely" eventuality, let us take for granted
that AI has an impact large enough to rewrite the solar system in arbitrary
ways conforming to the (actual) laws of physics. Let us not repeat our
uncertainty on this point over and over when we are developing contingency
plans for dealing with that "most likely" outcome. We take that
uncertainty into account once, at the beginning, when we weight the
importance of the contingency plan; and that you already called "most
likely". You may sound crazier to venture capitalists, VCs to whom the
point is still shocking, if you don't repeat your uncertainty over and over
again to reassure them you're not a cultist. But if you want to follow
Bayesian decision theory - if you want to arrive at a rational plan - you
can't factor the uncertainty of A multiple times into plans that depend on
P(X|A), P(Y|A), and P(Z|A).
What I'm trying to avoid here is the tendency of people to say: "Well,
suppose that this coin is 90% biased towards heads. In that contingency,
should we bet on the next three flips coming up HHH? It might seem like we
should, but consider: On the first round, the coin is very likely to come
up H, but only if the coin is indeed biased, which is very uncertain. Then
on the second round, it might come up H again, but again, this depends on
the coin being biased. And the third round has the same objection. So we
can see that HHH is actually an exceedingly unlikely outcome, even though
I've said it's possible and even probable that the coin is biased."
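Worked out numerically, the confusion in that argument is easy to see (the prior on the coin being biased is a made-up figure for illustration):

```python
# The biased-coin fallacy, with the arithmetic made explicit.
p_heads_if_biased = 0.9   # the stipulated 90% bias toward heads
p_biased = 0.6            # invented prior that the coin is biased at all

# Conditional on the bias, three heads in a row is the likely outcome:
p_HHH_given_biased = p_heads_if_biased ** 3   # 0.729

# The fallacy discounts *every flip* by the doubt about the bias,
# as though that doubt applied independently three times over:
fallacious = (p_biased * p_heads_if_biased) ** 3   # ~0.157

# Taking the bias as given while planning, and weighting by the doubt once:
correct = p_biased * p_HHH_given_biased       # ~0.437
```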
I sometimes look on people's thoughts and think that they have only a
limited supply of daring. Once an AI academic uses up all their daring on
suggesting that human-equivalent intelligence is possible in the next
thirty years, they can't consider further daring thoughts like transhuman
intelligence in the next 20 years, recursive self-improvement in the next
10 years, FAI being more difficult than AI, or the human species being in
imminent danger of extinction. They've used up their daring and won't be
able to say anything future-shocky until they refill their daring tank.
To arrive at the correct answer means arriving at some theory that runs on
its own rails to a prediction, regardless of what the prediction "sounds
like". Not all theories with this property are correct, but all correct
theories have this property.
> Any transhuman intelligence, of course, represents an absolute
> departure from human prediction,
I used to go around saying that, but that was when intelligence was a
sacred mystery to me. If I set an FAI in motion, it will be because I have
made some kind of very strong prediction about the FAI's effects - even if
that prediction is abstract, it will be a prediction; it will constrain the
space of outcomes.
> but for the time being, let us speak of what we can.
> The unfortunate thing, from my point of view, is that generating a
> conservative estimation of the economic impact of AI is nearly
> impossible. It pre-supposes several things.
> -First, that AI impacts economically before it changes the entire
> landscape, this seems quite possible, AI will take some time to
> develop, and even once complete will require some time to run. Even if
> it's just inflation hitting the roof as everyone with any money does
> whatever they think will avert the apocalypse during the last week of
> the Final Program, that counts as economic impact.
Sounds like a catastrophic scenario that it becomes your responsibility to
avoid, if you fulfill your dream of having a say in the matter. Besides,
the Final Week isn't the same as the five-year runup or twenty-year gradual
economic development that people like to fantasize about.
> -Second, that there is some period of stability in the development of
> AI that allows for AI 'products' to be evaluated in terms of
> relatively cognizant economic terms. This is very tricky. It has been
> popularly supposed by some that human-commensurate intelligence
> represents the top level, or a hard barrier, that AI research will
> continue to that point and then stop, or at least be slowed. It is
> likely that a certain level of intelligence represents the maximum
> effective potential of a particular design, due to scaling laws,
> architectural support requirements, or flaws in the design to start
> with. Unfortunately, an AI will not be using the same design as a
> human. It is, in my estimation, just as likely to top out at the
> commensurate intelligence of a mouse, or a dolphin, or so far above us
> that the intelligence is not measurable. It seems clear to me that
> minds need not follow a uniform plan with uniform strengths, although
> they may be very correlated. This makes design-independent analysis
Okay, this follows LOGI so far, but...
> Some hope in the form of computer power requirements, assuming
> biologicals and previous experience with unintelligent mechanical
> computation hold, the physical task of running an intelligence may
> limit it to certain levels of potential until larger/faster computers
> can be built. Unfortunately even Kurzweil's rather charming little
(What about Ilkka Tuomi's countergraphs?)
> graphs give us little time before available computation far outstrips
> human level, leaving us in the same boat. The stability given us there
> is fleeting, but does allow enough years to be evaluated on the
> economic scale.
And here we see the start of what leapt out at me as the chief mistake in
the whole analysis - ignoring the possibility of recursive
self-improvement. If AI scales nicely and neatly with the speed of
human-produced computing power, then you can have a nice little decade when
AI makes money. If the AI reaches an infrahuman threshold level where RSI
becomes tractable and then immediately goes FOOM, this decade does not
exist. If the AI reaches a threshold level and absorbs the Internet, it
goes FOOM. If the AI reaches a threshold level and makes a billion dollars
and the directors of your company decide to buy it a shiny new
supercomputer, it goes FOOM.
If you don't point this out to venture capitalists, they won't point it out
for you. You can fool venture capitalists, if that is your wish.
But you wrote a whole essay about the "economic potential" of AI, and you
didn't say anything about recursive self-improvement. That's a pretty
severe omission. This is the blindness that comes of turning your focus of
attention to persuading venture capitalists.
> -Third, that our status, as AI researchers and developers, will give
> us a privileged and controllable stake in the construction and
> deployment of AI products and resources, allowing us to capitalize on
> our investment, as per the standard industrial research model. This
> seems fairly safe, until one realizes that there are many forces that
> oppose such status, merely because of the nature of AI. Governments
> may not allow technology of this kind to remain concentrated in the
> hands of private corporations. AI may follow the same path as other
> technologies, with many parallel breakthroughs at the same time,
> leaving us as merely members of a population of AI projects suddenly
> getting results. The information nature of this development increases
> this problem a great deal. I have no reason to imagine that AI
> development requires specialized hardware, or is impossible to employ
> without the experience gained in the research of said AI software. So
> piracy, industrial espionage, and simple reverse-engineering may
> render our position very tenuous indeed.
"Tenuous" is an interesting word for the uncontrolled proliferation of a
technology easily capable of wiping out the world. Or worse, but that
still seems to me unlikely - one of the technology thieves would have to
master true FAI techniques for that.
> I have no easy answers for this assumption,
I don't intend to build technology that is user-friendly enough to be
stolen, or maybe "kidnapped" would be a better word. An FAI built solely
for one purpose will not easily be turned to another. If I steal Tolkien's
original manuscript of _Lord of the Rings_, it doesn't mean that I can flip
a little built-in switch to make a Gray Lensman the hero instead of Frodo -
not unless I'm a good enough author to rewrite the novel from scratch.
Though the FAI-nappers might still be able to blow up the world.
> save that while worrying, little evidence exists
> either way. I personally believe that our position is privileged and
> will remain so until the formation of other AI projects with
> commensurate theory, developed technology, and talent, at that point
> it becomes more problematic.
> Assuming we have answers to all these questions, we may find that AI
> is indeed a good way to make money, or at least in the near term.
Congratulations on persuading the venture capitalists you might make money!
What about the survival of the human species? Is that being considered?
Whoops, too late! It's too late to introduce that consideration into
your essay. It can't be pasted on afterward. If you were going to care,
the time to care was right at the beginning, before you knew what your plan
would be. Now all you can do is rationalize a consideration that wasn't
there when the actual plan was devised. Unless you actually change your
plan in some way to accommodate the new requirement, but I've never seen any
other AI researcher do that so I'm not spending much time hoping for it.
> I have a story I can tell here, but the supporting evidence is
> abstract, and indirect. Artificial Intelligence is likely, in my
> opinion, to follow an accelerating series of plateaus of development,
> starting with the low animal intelligence which is the focus of our
> research now. Progress will be slow, and spin off products limited in
> their scope. As intelligence increases, the more significant
> bottleneck will be trainability and transfer of learned content
> between AIs. This period represents the most fruitful opportunity for
> standard economic gain. The AI technology at this point will create
> three divisions across most industry, in terms of decision technology.
> You will have tasks that require human decision-making, tasks that can
> be fully mechanized, performed by standard programmatic
> approaches (normal coding, specialized hardware, special purpose
> products), and a new category, AI decision-making. This will be any
> task too general or too expensive to be solved algorithmically, and
> not complex enough to require human intervention. Both borders will
> expand, as it gets cheaper to throw AI at the problem than to go
> through and solve it mechanically, and as the upper bound of decision
> making gets more and more capable.
> I'm afraid I have no real evidence as to how long this period will
> last. It depends entirely on the difficulty of increasing the
> intelligence of the AI, which may reside in design, hardware, and to a
> certain extent, motivation (goal systems are a thesis in themselves,
> ask EY). I suspect, based on my experiences thus far, that early AI
> designs will be very lossy and faulty and poorly optimized for
> increasing in intelligence.
I used to think like that, before I understood that FAI had to be done
deliberately rather than by accident, which meant using only algorithms
where I understood why they worked, working out a complete and principled
theoretical framework into which everything would need to fit.
I yearn for the old days when I thought I could just throw cool-seeming
algorithms at the problem, but that's what makes FAI harder than AI.
> This may mean that a complete redesign of
> AI theory will be necessary to get to the next series of plateaus.
> Unless this is simply beyond human capability, there is no reason to
> think this will take any longer than the development of AI theory
> sufficient to get us to this point.
Recursive self-improvement seems to be missing in this discussion. Just a
band of humans gradually improving an AI that slowly acquires more and more
abilities. It makes for a nice fantasy of slow, relatively safe
transcendence where you always have plenty of time to see threats coming
before they hit.
In LOGI, there's also a discussion of plateaus and breakthroughs, but it
forms the background of a graph of ability against
efficiency/hardware/knowledge, *NOT* a graph of ability against *time*.
The graph against *time* has to take into account what happens when we fold
the prior graph in on itself to describe recursively self-improving AI.
> Sometime after this, economic aspirations become fleeting in the
> general upheaval and reconstitution caused by the arrival of another
> kind of intelligence. Some might say this is rather the point of AI
> research.
> Projecting into the future is always dangerous. I think that any
> attempt, especially the one above, to characterize the trajectory of
> any technology is doomed to be largely irrelevant. But some choices
> must be made on best available guesses, so here are mine. AI research
> will change a lot of things. In the near term, it will remain a fringe
> activity, and people will still ask the strange question 'what will
> those AIs be good for, anyway?'. But some investors will come, and the
> clearest way I can communicate with them what the goals and value of
> AI research is that it is vastly enabling. I don't know what the first
> task an AI will perform is. I know that it will be something that
> can't be done with anything else. It represents, in the near term, an
> investment in future capability. If money is what you're after
> primarily, I don't know how to defend an investment in AI research
> from the perspective of, say, venture capital. I can point to examples
> of enabling technology, like CAD, or tooling, or electrical power,
> which did not fit into the world they arrived in, but created their
> own industries.
> I'm not saying I can't make up clever uses for AI technologies that
> could make a gazillion dollars, if I had designs for them in my hand.
> There are obvious and clear storytelling ideas. But that would be
> intellectually dishonest.
I wish this were more widely appreciated. And I'm sorry that rationalists
must be penalized for knowing what constitutes a lie.
> I'm looking for a way to express, in terms
> of investment return, what AI is likely to actually do
Turn the solar system into paperclips.
> for us,
*That* takes a little more work.
> in a conservative, defensible sense.
Sometimes the most factually likely outcome is just not something that
sounds all nicey-nicey and sane to the unenlightened. You want to try
drawing a "conservative, defensible" picture of the early 21st century.
> This must be separated from, for example, safety concerns, in which it
> is perhaps useful to imagine, as some do on this forum, what the
> failure modes, what the fastest take off, what the actual capability
> of such developments may be. That isn't helpful in this kind of
> planning.
This sounds like a complete non-sequitur to me. I'm planning to prevent
the solar system from being turned into paperclips. I don't know of any
other consideration that ought to be overriding that. It sounds to me like
you just took all the obvious considerations that would scrap your essay,
and quickly tried to shove them under the carpet by saying, "But that isn't
helpful in this kind of planning." My good sir, what are you planning to
do, and why should anyone help you with it, if safety concerns (not having
your AI turn the solar system into paperclips) and failure modes (you
seriously think you can get away with not thinking about those, in *any*
essay?) and recursive self-improvement (which you didn't even mention)
don't enter into it?
It sounds like you already know why your essay is indefensible, but you
think that if you admit it really quickly and move on immediately, you
won't have to notice.
I am genuinely confused about what you regard as the point of your essay.
Is it to persuade venture capitalists of a conclusion beneficial to you,
predetermined before you started writing the essay? Is it to determine the
most likely answer on a question of fact on which you are presently
uncertain? Is it to advocate an alternate strategy, compared to the
Singularity Institute or some other line of thinking? It seems that I'm
hearing considerations that would be appropriate to all three purposes.
But the first consideration, at least, is something that shouldn't mix with
considerations two and three. If you're thinking about how to persuade
venture capitalists that they'll make money, you'd better not let even a
shred of thought carry over from that to your Singularity planning.
> I must anticipate a response suggesting that non-profit, private
> efforts to research AI, such as the Singularity Institute, AGIRI, etc
> are better suited for this subject matter, and in fact invalidate my
> queries as relevant at all. I remain very doubtful that this is the
> case. AI is not something to be solved quickly, nor something to be
> solved with few people with no money.
Here, for example, is an answer that sounds like it's appropriate to
arguing for some other strategy than the Singularity Institute pursues.
It's a non-sequitur to the previous paragraph, about why you don't need to
worry about safety concerns - something I'm not clear on, no matter what
you think you're arguing. That wasn't a request for you to think quickly
and rationalize a better reason, by the way, it was a request for you to
give up the fight and start thinking about inconvenient safety questions.
If you believe AI is not something that can be solved with few people, then
figure out a way to do it safely with many people. Don't defend your plan
by claiming that someone else's plan is worse. Your plan has to work for
itself, regardless of what anyone else is doing. As I keep repeating to
myself, having learned the lesson the hard way, "Being the best counts for
absolutely nothing; you have to be adequate, which is much harder." This
is another fallacy that comes of arguing with people - for if you want
funding, you will argue that you are the best effort. Realistically, mere
comparison is probably the most complex issue that we can hope to discuss
on a medium such as a mailing list. But it doesn't change the necessity of
adequacy. You can't cover a problem in your own project by saying that
someone else has a different problem.
Oh, and another lesson I learned the hard way: You can't say how difficult
a problem is to solve unless you know exactly how to solve it. Saying,
"This problem is NP-complete" means it's knowably hard (assuming P != NP):
it requires at least X cycles to compute. Saying "this problem mystifies and
baffles me" does not license you to estimate the number of people required
to implement a solution once you have one.
> AI is in its first stages of
> real development, and a massive amount of research and data needs to
> be collected, if AI theories are to be informed by more than
> introspection and biological analogue.
If you're not familiar with the massive amount of research and data already
gathered, what's the use of asking for more? We already have more research
and data than a human being could assimilate in a lifetime. I have studied
a narrow fraction of existing knowledge which is nonetheless pretty damn
wide by academic standards, and I have found that sufficient to my needs.
> Like so many things in our
> modern world, AI will be done long before we can properly evaluate and
> prepare ourselves for the results, however long it takes.
Who's "we"? Eliezer Yudkowsky? Leon Kass?
This sort of plaint is not an excuse. Prepare or die. No, it's not easy.
Do it anyway. Am I the only AI wannabe who knows this?
> But people
> need to have reasons to join AI efforts, to fund them, and to support
> them, in levels thus far not seen. I submit this is at least partially
> because this kind of analysis is either not publicised, or has simply
> not been done.
We'll see how much funding your essay generates for A2I2, but I'm betting
on not much. For one thing, it was aimed at an audience that, you seem to
think, currently assumes a hard takeoff; you spend most of your essay
defending the assertion that you'll have enough of a breathing space to
make a profit.
It's not clear to me whether you're trying to write persuasively or settle
a question of fact. In either case, it seems that you're appealing to the
need to persuade people, or something, to explain why you're ignoring
safety concerns. You say, "No one can prepare", then, "But people need to
have reasons to join AI efforts". Why does sentence B follow sentence A?
Why do you presume your audience already agrees that people need reasons to
join (your) AI effort, and that this is an acceptable justification for...
Who are you trying to persuade? Of what? This essay's point is unclear.
> This kind of analysis also raises the rather uncomfortable spectre of
> doubt, that I have jumped into a field of study without sufficient
> research and investigation, or have unrealistic (at least ungrounded)
> expectations for the fruits of my work. I submit that my primary
> interest in AI is at least partially unrelated to gain of these kinds,
> and secondarily informed by the safety concerns, asymmetric potential,
> and increasing importance investigated much more clearly by other
> authors (Vinge, Yudkowsky, Good).
SECONDARILY informed? What in Belldandy's name is your PRIMARY concern?
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT