From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Fri Oct 22 2004 - 04:00:13 MDT
Slawomir Paliwoda wrote:
>
>> Here's a thought experiment: If I offered people, for ten dollars, a
>> pill that let them instantly lose five pounds of fat or gain five
>> pounds of muscle, they'd buy it, right? They'd buy it today, and not
>> sometime in the indefinite future when their student loans were paid
>> off. Now, why do so few people get around to sending even ten dollars
>> to the Singularity Institute? Do people care more about losing five
>> pounds than the survival of the human species? For that matter, do
>> you care more about losing five pounds than you care about extending
>> your healthy lifespan, or about not dying of an existential risk?
>> When you make the comparison explicitly, it sounds wrong - but how do
>> people behave when they consider the two problems in isolation?
>> People spend more on two-hour movies than they ever get around to
>> donating to the Singularity Institute. Cripes, even in pure
>> entertainment we provide a larger benefit than that!
>
> Eliezer, I think your involvement in this project has caused you to lose
> a bit of the sense of objectivity necessary to evaluate true options
> included in your thought experiment, and I infer that from your
> question: "Do people care more about losing five pounds than the
> survival of the human species?" What this question implies is the
> assumption that donating to SIAI equates to preventing existential risks
> from happening. Your question has an obvious answer. Of course people
> care more about the survival of the human species than losing five pounds, but
> how do we know that SIAI, despite its intentions, is on a straight path
> to implementing humanity-saving technology?
Do I have to point out that people spend a heck of a lot more than ten
dollars trying to lose five pounds, based on schemes with a heck of a lot
less rational justification than SIAI has offered? My puzzle still stands.
The possibility of humanity being wiped out seems to have less
psychological force than the opportunity to lose five pounds. No matter
how much we grow, I don't think we'll match the membership or resource
expenditure of any major weight-loss meme. That's just not psychologically
realistic given human nature. Now, I do not think that so many resources
should be required. I'll be surprised if we need more than two percent of
the cost of a B-2 bomber. But my puzzle stands.
> What makes your
> organization different from, say, an organization that also claims to
> save the world, but by different means, like prayer, for instance?
Rationality. Prayer doesn't work given our universe's laws of physics, and
that makes it an invalid solution no matter what the morality.
Isn't this *exactly* the same argument that people use against cryonics or
nanotechnology?
> And
> no, I'm not trying to imply anything about cults here, but I'm trying to
> point out the common factor between the two organizations which is that,
> assuming it's next to impossible to truly understand CFAI and LOGI,
> commitment to these projects requires faith in implementation and belief
> that the means will lead to the intended end. One cannot aspire to
> rationalism and rely on faith at the same time.
Bayesians may and must look at what other Bayesians think and account it as
evidence. ("Must", because a Bayesian is commanded to take every scrap of
available information into account, ignoring none upon peril of paradox.
Jaynes 1:14.) Robin Hanson wrote a fascinating paper on meta-rationality
which proves from reasonable assumptions that Bayesians cannot agree to
disagree; they must have the same probability judgment once they both know
the other's, both know the other knows theirs, etc. Nick Bostrom and I, on
the way back from Extro 5, tried to split a taxi ride and found an extra
$20 bill in our contributions. I thought the $20 was his, he thought it
was mine. We had both read Hanson on meta-rationality, and we both knew
what we had to do. He named the probability he thought it was his (20%), I
named the probability I thought it was mine (15%), and we split it in a
20:15 ratio.
http://hanson.gmu.edu/deceive.pdf
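A minimal sketch of that split in Python (the function name is mine; the
dollar figures simply follow from the 20:15 ratio, roughly $11.43 against
$8.57):
def split_by_stated_odds(amount, p_his, p_mine):
    # Divide the found money in proportion to each party's stated
    # probability that it belongs to him.
    total = p_his + p_mine
    return amount * p_his / total, amount * p_mine / total
his_share, my_share = split_by_stated_odds(20.00, 0.20, 0.15)
print(round(his_share, 2), round(my_share, 2))   # 11.43 8.57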
Guessing how much other people know relative to you is not faith, so long
as you pursue it as a question of simple fact, taking into account neither
personal likes nor personal dislikes.
> I've noticed a Matrix quote in your essay, ("Don't think you are, know
> you are"). There is an equally interesting quote from Reloaded you might
> agree with, and it is when Cornel West responds to one of Zion's
> commanders, "Comprehension is not requisite for cooperation." And even
> though I'm convinced that the Matrix trilogy is an overlooked masterpiece,
> much farther ahead of its time than Blade Runner ever hoped to be, I
> don't think Mr. West was correct. Comprehension is indeed a requisite
> for cooperation, and as long as you are unable to find a way to overcome
> the "comprehension" requirement, I don't think you should expect to
> find donors who don't understand exactly what you are doing and how.
Which existential risks reality throws at you is completely independent of
your ability to understand them; you have no right to expect the two
variables to correlate. Do you think it the best policy for Earth's
survival that nobody ever support an existential-risk-management project
unless they can comprehend all the science involved? We'd better hope
there's not a single existential risk out there that's hard to understand.
If it requires an entire semester of college to explain, we're doomed.
I've tried very hard to explain what we're doing and how, but I also have
to do the actual work, and I'm becoming increasingly nervous about time.
No matter how much I write, it will always be possible for people to demand
more. At some point I have to say, "I've written something but not
everything, and no matter what else I write, it will still be 'something
but not everything'."
> Let's say I'm a potential donor. How do I know, despite sincere
> intentions of the organization to save the world, that the world won't
> "drift toward tragedy" as a result of FAI research made possible in part
> by my donation? How do I know what you know to be certain without
> spending the next 5 years studying?
You guess, choosing a policy such that you would expect Earth to reliably
survive technically intricate existential threats if everyone followed your
rule. It's irrational to allocate billions of dollars to publicly
understandable but slight risks, and less than a million dollars to a much
worse risk where it's harder for a member of the general public to
understand the internals.
That the risk exists, and that it's very severe, should both be
comprehensible - not easily, maybe, but you should still be able to see
that, rationally, on the basis of what I've already written. And if it's
still hard to understand, what the hell am I supposed to do? Turn a little
dial to decrease the intrinsic difficulty of the problem? Flip the switch
on the back of my head from "bad explainer" to "good explainer"? I do the
best I can. People can always generate more and more demands, and they do,
because it feels reasonable and they don't realize the result of following
that policy is an inevitable loss.
I should point out that your argument incorporates a known, experimentally
verified irrational bias against uncertainty. People prefer to bet on a
coin with known 50% odds rather than on a variable completely unknown to
them, like a match between foreign sports teams. From an expected utility
standpoint, you should assign the two bets the same value, especially if
you flip a coin to decide which team to bet upon (a manipulation that makes
the problem transparent). But people have a visceral dislike of
uncertainty; they want to know *all* the details. Even a single unknown
detail can feel like an unscratched itch.
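A minimal sketch of why the coin-flip manipulation makes the two bets
equivalent, in Python (the sample values of the unknown probability p are
illustrative, not taken from any experiment):
def win_probability_with_coin_flip(p_team_a_wins):
    # Flip a fair coin to pick which team to back; whatever the unknown
    # probability p that team A wins, your chance of winning the bet is
    # 0.5*p + 0.5*(1 - p) = 0.5, the same as betting on the known coin.
    return 0.5 * p_team_a_wins + 0.5 * (1.0 - p_team_a_wins)
for p in (0.1, 0.5, 0.9):
    print(p, win_probability_with_coin_flip(p))   # 0.5 every time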
I sympathize, of course. Yay curiosity! Go on studying! But don't hold
off your support until you've achieved a full technical understanding of
AI, because that's a policy guaranteed to doom Earth if everyone follows
it. Though you didn't wait, of course; you are a prior SIAI donor, for
which we thank you. You asked legitimate questions with legitimate
answers, but you still donated - you didn't do *nothing*. I likewise hope
you don't choose to wait on future donations.
The idea that absolute proof is required to deal with an existential risk
is another case of weird psychology. Would you drive in a car that had a
10% chance of crashing on every trip? There's no *absolute proof* that
you'll crash, so you can safely ignore the threat, right? If people
require absolute proof for existential risks before they'll deal with them,
while reacting very badly to a 1% personal risk, then that is an
inconsistency that needs explaining.
As we all know, there's nothing worse in this world than losing face. The
most important thing in an emergency is to look cool and suave. That's
why, when Gandalf first suspected that Frodo carried the One Ring, he had
to make *absolutely sure* that his dreadful guess was correct,
interrogating Gollum, searching the archives at Gondor, before carrying out
the tiniest safety precaution. Like, say, sending Frodo to Rivendell the
instant the thought crossed his mind. What weight the conquest of all
Middle-Earth, compared to the possibility of Gandalf with egg on his face?
And the interesting thing is, I've never heard anyone else notice that
there's something wrong with this. It just seems like ordinary human
nature. Tolkien isn't depicting Gandalf as a bad guy. Gandalf is just
following the ordinary procedure of taking no precaution against an
existential risk until it has been confirmed with absolute certainty, lest
face be lost.
I don't think it's a tiny probability, mind you. I've already identified
the One Ring to my satisfaction... and looked back, and saw that I'd been
stupid and demanded too much evidence, and vowed not to make the mistake again.
Imagine if everyone at the Council of Elrond had to call a six-month halt
so they could also go check the archives at Gondor. By all means send for
some Xeroxes and study them at your first opportunity, but get on with
saving the world meanwhile. Sauron waits for no Xerox machine.
> Other questions: Why would the SIAI team need so much money to continue
> building FAI if the difficulty of creating it does not lie in hardware?
> What are the real costs?
Extremely smart programmers. Who said anything about needing "so much
money"? I expect to save the world on a ridiculously low budget, but it
will still have to be one to two orders of magnitude higher than it is now.
Hiring extremely smart programmers does take more money than the trickle
we have currently.
> Why has the pursuit of fame now become a just reason to support SIAI?
> Are you suggesting that SIAI has acknowledged that ends justify means?
I think better of someone who lusts after fame and contributes a hundred
bucks than a pure altruist who never gets around to it. I don't think that
counts as saying that the end justifies the means. The other way around:
By their fruits ye shall know them.
> Increased donations give you greater power to influence the world. Do
> you see anything wrong in entrusting a small group of people with the
> fate of entire human race?
I see something wrong with giving a small group of people the ability to
command the rest of the human race, hence the collective volition model.
As for *entrusting* the future - not to exercise humanity's decisions, but
to make sure humanity exercises them - I will use whichever strategy seems
to offer the greatest probability of success, including blinding the
programmers as to the extrapolated future, keeping the programmers isolated
from a Last Judge who can only return one bit of information, etc. Or not,
if I think of a better way.
The alternative appears to be entrusting small groups of people who aren't
even trying to solve the problem with the fate of the entire human race.
That looks to me like a guaranteed loss and I'm not willing to accept that
ending.
> What would you tell people objecting to that idea?
"I'm sorry. Someone has to do something if any of us are going to survive,
and this is the best way I've been able to find. You object, but I have not
heard you propose a better alternative, unless it is letting catastrophe go its
way unhindered. You can't argue fine points of moral dilemmas if you're dead."
> Do we have the right to end the world as we know it without their
> approval?
There are no rights, only responsibilities. I'll turn the question over to
a collective volition if I can, but even then the moral dilemma remains,
it's just not me who has to decide it.
The question is not whether the world "as we know it" ends, for it always
does, generation after generation, and each new generation acts surprised
by this. The question is what comes after.
> These are difficult questions which perhaps illustrate the difficulty of
> the deceptively simple choice to support, or not to support, SIAI.
Deciding whether to try to save the human species is an extremely
complicated question. You can get so tangled up in the intricacies that
you forget the answer is obviously yes.
Lest we lose momentum, would any of SIAI's new donors care to post some
positive remarks on the Today and Tomorrow campaign? Part of the problem
that transhumanist organizations have in organizing, I think, is that when
a new effort tries to launch, we hear from the critics but not all the
people who approve; it creates a bias against anything getting started.
SIAI *is* getting new donors as a result of this effort - though I won't
tell you how many until it's over. It's *your* opportunity, not anyone else's.
--
Eliezer S. Yudkowsky                       http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence