From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Jul 22 2001 - 00:11:21 MDT
(Now *this* post is legit content for SL4.)
==
Some days I wonder if maybe I've toned it down a little too far.
The practice of hyping a project's goals beyond what it can reasonably
achieve is so universal that one forgets the peril of understating the
goals. If you have a small goal presented as a great one, then people are
liable to overestimate the rewards, and be disappointed. But if you have
a great goal presented as a small one, then people are liable to
underestimate the efforts required to achieve it. I think this is what's
happening on SL4.
There was a time when I thought that the Singularity would occur in 2030,
the product of a vast Manhattan Project, enabled by the successor
to the Web - a global cooperative effort by a substantial sector of the
human species, with full understanding of what was at stake in the Singularity
and willing to donate time, effort, and money to achieve it.
In retrospect, I thought this way because I didn't understand how to solve
the problem. If you don't understand how to solve a problem, then you
imagine yourself overcoming it by brute force. This is not to say that
the brute-force strategy would fail. Even so difficult a problem as AI
could probably be overcome by brute force in the year 2030. I am not
saying that imagining brute force is fantasy or unworkable; rather, I am
saying that you only imagine it when you don't have a good image of how to
solve the problem.
This concept changed when I got to understand AI better. I learned more
about how large-scale development projects worked, and encountered the
concept of open source. It also occurred to me that 2030 or even 2020 was
perhaps too conservative, given the rate of increase of *networked*
computing power.
This was during the period when I wrote the first proposals for a
Singularity Institute. I started thinking about whether an open-source AI
project could bring in the necessary resources without a Manhattan
project. I visualized the birth of an AI industry, driven by the use of
open-source AI for its most natural application: programming and source
code. Flare played a much larger part in my thought then than it does
now, since at first I thought in terms of a vast Internet distributed
computing network as the enabling condition for the birth of seed AI,
which implied a programming language in which people could write
distributed code, which meant that such a programming language would
possibly even be on the *critical* path to Singularity.
However, I soon also realized that shifting the time horizon to the
vicinity of 2010 probably didn't leave much time for people to get used to
the idea of a Singularity, and that running the project in distributed fashion would
mean that the initial stages of the hard takeoff would be easy to observe
with standard network monitoring tools, quite possibly triggering a panic
and disaster at the last minute. Even if 95% of humanity was on board
with the Singularity in theory, the enormous disruption that could result
if people saw it was Actually Happening could quite easily kill a huge
number of people at a point when it would be incredibly tragic and
futile. So I dumped the fairly large amount of material about distributed
computing, and the final version of my writings made reference to a
privately owned supercomputer, or a rented supercomputer, rather than a
SETI@Home project. That's not to say that I don't think having an "in"
with the distributed-supercomputer industry would be nice; I still think
so, but I no longer rely on it.
More time passed. I started writing GISAI (then CaTAI 2.0), and studied
up some more on the dynamics of open-source projects. I realized that
open source and the idea of birthing an AI industry had also been
brute-force approaches to the problem. Again, I'm not saying that these
approaches wouldn't have worked. They probably would have. (Although
whether they would have worked for *Friendly* AI was another issue. Back
then, I hadn't realized that Friendly AI would be necessary.) But pouring
a vast amount of open-source intelligence into the project, or throwing an
AI industry at the problem, was still postulating massive quantities of
external brainpower; it was still, fundamentally, a "brute intelligence"
Singularity strategy.
Once I had an idea of what *specifically* needed to be done to build a
complete mind, rather than just saying "It's a Quest, let's throw
resources at the problem," I also realized that open source and an AI
industry might not help all that much. The old strategy was a gradualist
approach to the quest for AI, composed of slow incremental steps. This
makes the path more solid, but much slower, and (as I also began to
realize) considerably less compatible with Friendly AI. The kind of
things that would need to be done to build seed AI *directly* weren't very
susceptible to open source, and would not be much benefited by the
existence of an AI industry except indirectly. The massive-brainpower
strategy would have worked eventually, as the result of the slow buildup
of knowledge, but it would not have been the *direct* path. Of course, a
direct path would require a much deeper understanding of the problem and
would require deeper coding. On the plus side, though, the expected
effort needed to implement the Quest shrank yet again, this time all the
way down to the level of what a single large project might be able to
implement.
And that, I think, is how I wound up toning it down a little too far.
Because now people have no way of realizing that we are, in fact, embarked
on a Quest. The people who recall when the Singularity Institute was an
impossible dream may still remember how nice it was, and/or scary, to hear
that SIAI had been incorporated. But some of the people here will have
first encountered the Singularity Institute as a fait accompli. People
may have first heard us talking about 2010 and thought we were just being
overoptimistic, not realizing how much mental effort went into figuring
out how to make it "2010" instead of "2020" or "2030".
In retrospect, I think this is why some of the posters on SL4 seem to be
perceiving SIAI as an overambitious AI project, rather than as an
ultraleveraged way of implementing the last great crusade in human
history. The Singularity Institute needs to be seen in context, and the
context is the Singularity. I mean, THE SINGULARITY. The beginning of
human history. We would very much like to do it in ten years, but if it
takes twenty years to do it then we will SPEND twenty years, because this
*is* a crusade and the entire planet is at stake.
We are an AI project because we think that's the fastest way to the
Singularity. We are not a group that *started out* as an AI project and
then got too big for their britches; we started out as Singularitarians
and *then* decided that AI was a good way to do it. In the process, I
wonder if maybe we haven't gotten too small for our britches. Maybe we've
forgotten that we need to actually *tell* people about the crusade part of
it, or they won't know.
If I didn't think Flare had something to do with the Singularity, if Flare
were Yet Another Programming Language, then it would not be on my
horizons. Flare is interesting to me because I think that the art of
programming is stuck in a rut, a rut worn by developing our basic thought
patterns through programming on machines that we would today regard as
abacuses - machines where efficiency was more important than the
programmer's time. XML (offtopic, I admit, but still illustrative) came
along when people finally said, "OK, we HAVE the disk space, we have
gigabytes and gigabytes of disk space, we have disk space to spare, what
we don't have is the patience to decode a thousand incompatible entangled
binary formats." Flare is intended to take the same step for programming
languages. You can see it in, for example, the idea of a FlareSpeak IDE
that is not irrevocably bound to be plaintext. You can see it in the idea
of being able to annotate every Flare element; in the idea of tracking
two-way references; in the idea of XML program files; in the idea of using
an extensible-tree representation for the interpreted program; and so on.
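To make the extensible-tree idea concrete, here is a minimal sketch, not actual Flare (the online material never specifies the language to this level), of what storing a program as an annotated XML tree might look like, using Python's standard xml.etree library. The element names ("def", "note", and so on) are invented for illustration.

```python
import xml.etree.ElementTree as ET

# A tiny function stored as an extensible tree rather than plaintext.
# Because every node is an element, any tool can attach annotations
# without breaking parsers that don't know about them.
source = """
<def name="square">
  <arg name="x"/>
  <return><mul><get name="x"/><get name="x"/></mul></return>
  <note author="tester">covered by unit test 17</note>
</def>
"""

tree = ET.fromstring(source)

# A later tool adds its own annotation; existing consumers simply ignore it.
profile = ET.SubElement(tree, "note")
profile.set("author", "profiler")
profile.text = "hot path: 40% of runtime"

# Walking the tree is uniform: code and annotations are both just elements.
notes = [n.text for n in tree.findall("note")]
print(notes)
```

The point is not this particular schema, but that an extensible tree lets annotations accumulate on program elements without any change to the tools that came before.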
What's written up on Flare is not the whole shebang. It's a fast set of
documents I emitted over the course of a few days because someone
volunteered to lead the project and I wanted to find out how many other
people would be interested in supporting it. Somewhere along the line, I
seem to have forgotten to write down that Flare is a Quest. It is unlike
seed AI in that Flare is a quest I can hand off to someone else. But it's
still a Quest.
If it takes five years for Flare to have an impact, then the moral is that
we had better start today. If I don't plan on a timescale longer than
five-to-twenty, well, I don't plan on a shorter timescale either. I
expect the Quest for AI will still be continuing five years from now
unless we have one heck of a run of good luck. Should I be planning on
still having, five years from now, more or less the same programming
languages that we have today? Or should I be hoping to see at least a few
steps into the design space that Flare is intended to open up?
If SIAI can undertake the Quest for Singularity in the form of a single AI
programming project, then more power to us, since it means we'll have
found a really small and fast and efficient way of achieving the
Singularity. It means we'll have a strategy that doesn't require us to
grow to the size of the Bill and Melinda Gates Foundation, even though our
goals are a heck of a lot more ambitious than theirs. But I am still
going to be asking for certain things in support of the Quest for
Singularity, a.k.a. the AI implementation project, that I would not be
asking for if I were trying to implement a billing system for my local
grocer.
The Quest does not get any smaller. You may be able to come up with a
better way of achieving goals, but you can't compromise on goals. You
cannot downsize the Quest to fit available resources. The most you can do
is delay what you do today because you hope to have more resources
tomorrow. If you need more resources, then part of the Quest becomes
finding those additional resources, and you put in whatever efforts are
required to get those additional resources. If you don't get those
resources, then you fail to bring about the Singularity, and either
someone else does the same Quest, or the human species dies. Pointing out
SIAI-specific problems is one thing, but I'm continually thrown off-stride
by people who casually say "No Singularity effort will ever succeed
because of such-and-such" without realizing that they've just passed a
death sentence on the human race. It's always possible that humanity is
just screwed, plain and simple, so such comments have their place, but I
still wish people would remember to say "You'll never succeed because of
such-and-such, and therefore we're probably all going to die"; this would
tell me that they have thought their criticism through in some detail.
When I am talking about building AI, I am talking about something that
just four years ago I would have thought of as a project to be done in
2020 or 2030 with the assistance of a significant fraction of the human
race. This is because I did not understand AI and therefore planned on
brute-forcing it. The fact that I now think "Yes, we can start today"
does not mean that I expect it to be easy or that I expect it to work out
as an ordinary programming project would. We are not talking about an
ambition that started out as "write this cool program" and expanded to
form a nonprofit; we are talking about a Quest that started out the size
of the entire planet and eventually got analyzed and leveraged to the
point where a single nonprofit could do it.
One of the tools I would like to have on the AI Quest is a better
programming environment, by which I mean, "Programming tools a whole world
beyond the abacus-originated morass we are stuck in today", and not just a
handful of features added to some current IDE. Now, it is perhaps not
reasonable for someone implementing the billing system at the grocery
store to demand significant advances in the art of programming; but SIAI
needs to create an anachronism, a program born years out of time, and I
very much want better tools. In fact, I want anachronistic tools.
I realize that not all of this is visible in what I have already written
up on Flare. That's because I'm supposed to be handing this off. It's
not my job to write the "Creating Friendly AI" of Flare. What's out there
is a few ideas tossed out to get people interested. And I bashfully
confess that what's out there is pretty neat. But if it were just "pretty
neat" then I would never have gotten involved. There are dozens of neat
ideas that I don't do anything with because they are not
Singularity-related neat ideas. The Causality design pattern is seeing
the light of day for the first time as a footnote to Flare, even though
the Causality design pattern is really cool, because humanity's adoption
of the Causality design pattern is not on the direct path to Singularity.
Creating the next step in programming languages is not a trivial thing to
do, but it is something that has been done successfully several times
before, with incremental improvements smaller than the ones already
visible in the online material on Flare. Creating the next programming
language is certainly less difficult than the Quest for AI.
Flare considered as an isolated set of improvements, and as a possible
language for SIAI work over the next couple of years, does not perhaps
have Questlike quality - although make no mistake, it certainly has
projectlike quality; an initial Flare implementation would be extremely
useful and those first steps will be a lot more difficult without it.
Flare considered as a *first* step does have Quest-nature. The most
important *long-term* qualities of Flare are not the immediately powerful
features, such as invariants and two-way references. The most important
characteristics of Flare are the subtle qualities that make Flare a first
step into a larger design space that is then exploitable by a series of
incremental improvements. Moving beyond IDEs *necessarily* bound to
plaintext is a subtle first step, even if the first FlareSpeak IDE happens
to be entirely plaintext. Having the programming language represented as
XML (*extensible* tree structures) is a subtle first step. Having trees
of control instead of threads of control (you haven't seen this part yet)
is a subtle step. We may need to compromise on some
subtly-futuristic-but-not-immediately-powerful aspects in order to deliver
a Visibly Cool, 50%-right version of Flare in reasonable time, but those
aspects of the language will still be present in the background.
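As an illustration of the two-way reference idea, here is a sketch in Python rather than Flare, with invented names: when one element acquires a reference to another, the target automatically learns who points at it, so a question like "who depends on me?" becomes a local lookup instead of a program-wide search.

```python
class Element:
    """A program element whose references are automatically two-way."""

    def __init__(self, name):
        self.name = name
        self.refs = []       # elements this one points at
        self.backrefs = []   # elements that point at this one

    def point_at(self, target):
        # Forward reference plus automatic back-reference: the target
        # always knows its referrers without any global scan.
        self.refs.append(target)
        target.backrefs.append(self)

    def drop(self, target):
        # Both directions must be updated together to stay consistent.
        self.refs.remove(target)
        target.backrefs.remove(self)


config = Element("config")
parser = Element("parser")
logger = Element("logger")
parser.point_at(config)
logger.point_at(config)

# "Who depends on config?" is now answered locally.
print([e.name for e in config.backrefs])
```

This is only the data-structure half of the idea; a real language would also have to maintain the back-references across serialization and editing, which is part of what makes it a language feature rather than a library.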
I have been focusing on the short term, and the things I want Flare to do
for the AI project in the next year or two, because hey, that's what'll
happen first, and that's what I should be spending most of my time
thinking about. Even if the "subtleties" fail to trigger any future
improvements, the Singularity Institute will still be able to write
self-watching annotative code and the first steps of the AI project will
be a lot easier.
But there's also a Quest component to Flare, where if it takes four years
to do it, then we will start today instead of 2004 and have it ready in
2005 instead of 2008. Flare is not superhumanly difficult (the AI side
*is* superhumanly difficult), and Flare can be forked off as an
open-source project, so it doesn't have to be too much of a distraction
for SIAI. And if you insist on raising up those distracting
"practicality" issues, then I would quietly point out that Flare, *not*
being superhumanly difficult, and not needing to be closed-source, is
something that SIAI can make well-defined progress on and cite in the
directly foreseeable future as an accomplishment involving written, visibly
functioning code that does something immediately perceptible as cool.
I emphasize that that is not the primary purpose of Flare. There is
something pathetic about wanting to do the impossible and important and
settling for the easy and pointless. Rest assured that we are not going
down that road; we are still Singularitarians, and we won't be distracted
by something other than creating AI just because creating AI happens to be
extremely difficult. Flare *is* relevant. But the fact that Flare is a
relevant project that we can begin immediately with our (nearly
nonexistent) free resources, accomplish visible progress on in bounded
time, and release to the entire world without fear of abuse, should not be
overlooked. It's one of the reasons why Flare was originally described as
the *first* project to be initiated by the proposed Singularity Institute.
==
Sincerely,
Eliezer.
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence