From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Mon Nov 13 2000 - 22:45:54 MST
Ben Goertzel wrote:
> There are many things to say on this topic, and for starters I'll say only a
> few of them
> I knew Sasha really well, and I think I have a deep understanding of what
> his views were, and I don't think I misrepresented them.
I can accept that.
> This doesn't mean I think Eliezer's views are the same.
> I love the human
> warmth and teeming mental diversity of important thinkers like Max
> More, Hans Moravec, Eliezer Yudkowsky and Sasha Chislenko,
> and great thinkers like Nietzsche – and I hope and expect that these
> qualities will outlast the more simplistic, ambiguity-fearing aspects of
> their philosophies.
I think that "Max More" and "Eliezer Yudkowsky" do not belong in the referent
of "their". I don't think that either of us is simplistic, and I don't think
that either of us fears ambiguity.
> Another line of thinking, which Sasha explicitly maintained in many
> conversations with me -- and Moravec and
> many other libertarians hint at in their writings -- is that the "hi-tech"
> breed of humans is somehow superior
> to the rest of humanity and hence more deserving of living on forever in the
> grand and glorious cyber-afterlife...
> The former line of thinking is one I have some emotional sympathy for --
> I've definitely felt that way myself
> at some times in my life. The latter line of thinking really gets on my
> nerves.
I have no sympathy for the second viewpoint. If Sasha believed it, then boo
on him.
> > I also don't believe that the poor will be left out of the Singularity...
> > One of the major reasons I am *in* the Singularity biz is to wipe out
> > nonconsensual poverty.
> > Ben Goertzel, does that satisfy your call for a humanist transhumanism?
> Well, Eliezer Yudkowsky, it does and it doesn't...
> On the one hand, I'd like to say: "It's not enough to ~believe~ that the
> poor will automatically
> be included in the flight to cyber-transcendence. One should actively
> strive to ensure that this happens.
> This is a very CONVENIENT belief, in that it tells you that you can help
> other people by ignoring them...
> this convenience is somehow psychologically suspicious..."
I can actively strive to ensure that the poor are not left out of the
Singularity by building a Friendly AI, such that the referent for Friendliness
treats all humans as symmetrical, and such that property ownership within
human society is not seen as relevant to dividing up the Solar System outside
of Earth.
I used to have the extremely convenient belief that Friendly AI would happen
completely on automatic, because morality was objective. I now believe that
Friendly AI needs to be more complex than that: because morality might be
nonobjective; because it might be objective but still in need of explicit
specification; because it might take more underlying complexity for the AI to
reach objective morality from its starting point; and because it may take more
underlying complexity to create a self-modifying AI which is stable during the
prehuman phase of development.
Recently, I've been putting considerable work into building a foundation for
Friendly AI. I should also note that I don't believe cyber-transcendence
*itself* can be assumed; I think that humanity could easily be wiped out by
any number of catastrophes. That's why I'm devoting my life to making the
Singularity happen earlier - a benefit that encompasses all of humanity,
including the poor.
So I *am* striving.
> On the other hand,
> a) I'm not really doing anything to actively strive to ensure that this
> happens, at the moment, because building
> a thinking machine while running a business is a pretty all-consuming
> occupation. So to criticize you for doing
> like I'm doing would be rather hypocritical. Of course, once Webmind has
> reached a certain point, I ~plan~ to
> explicitly devote attention to ensuring that the benefits of it and other
> advanced technology encompass everyone ...
I'm working out the docs on Friendly AI now, but let me just say that this
will need to happen *before* Webmind becomes capable of self-modification.
*Cough*. Sorry. Back to the moral philosophy...
> b) hey, maybe you're right. I don't KNOW that you're wrong.... Sometimes
> you get lucky and the convenient
> beliefs are the right ones!!
I don't think it is convenient. Ifni knows I wish it were. Life was a lot
simpler back when I was treating objective morality as the only important
branch of reality.
> Saying that the best way to help the poor is to bring the Singularity about
> is definitely an "ends justifies
> the means" philosophy -- because the means involves people like you and I
> devoting our talents to bringing
> the Singularity about rather than to helping needy people NOW....
Who is "needy"? The Singularity isn't just for the poor; it's for *everyone*,
including me. Should the person struggling along on $12K/year feel guilty for
not sending 75% of his income to South Africa? On the scale of the
Singularity, we're all needy, including me. This isn't something I'm doing to
"lend a helping hand to the lower classes"; this is something that I'm doing
because *all* of humanity is in desperate straits, and the only way out is
through a Singularity.
Far be it from me to oversimplify. I acknowledge that this is a complex,
emotional issue. I think that there exists an emotion within us that makes us
feel an obligation to those less fortunate, and a collection of cognitive
drives which lend an intuitional appeal to the philosophy that says that we
have more than enough, and that by failing to give everything we own to
charity, we are trading off an immediate good we could do for some nebulous
future good. But we also have emotional hardware which binds to a different
model of reality; a model in which we are not rich, in which we do *not* have
enough, in which everything that our money can buy is the smallest fraction of
what every human being deserves. A view in which we stand alongside the most
poverty-stricken orphan, equally deserving, simply by virtue of being merely
human. A view under which we're all equally unhappy and we're all in this
together. I think this viewpoint has equal emotional validity.
The viewpoint that says "all concentration of wealth is bad" has emotional
validity, but it's emotional validity which is false - that is, which involves
false-to-fact mental imagery. Producing more wealth requires concentration of
wealth in venture capital. Let us consider the "wealth function" of a
planet. If the bumps in the wealth function just smoothed out, flowing away
with complete liquidity, the planet would not be well-served; the whole Earth
would be a Third World country, without enough wealth in any one location to
create an advanced industrial base. Earth is equally ill-served if wealth is
completely illiquid; bumps in the wealth function may get higher, but without
any of it flowing over... and particularly, without any of it flowing over to
start new bumps.
The happy medium is moderate liquidity of wealth, which translates back into
our own minds as a justified emotional validity for giving part-but-not-all of
your income to charity. If the percentage of income is significant, above
average, and above average for your wealth bracket, then there is no logical
reason for you to feel guilty. If you spend a lot of your time investing
venture capital or CEOing or otherwise creating new wealth, then there is no
logical reason for you to feel guilty.
I don't feel guilty about enjoying what I have. I don't have all that much.
Others have less, others have more. We're all humans and in this together.
And as far as I'm concerned, spending my *entire* adult life trying to save
the world from unspeakable horrors pays all my guilt dues for the next three
lifetimes.
> In other words, I'm happy enough to critique Extropianism as a collection of
> words written down on paper,
> a conceptual philosophy.... I'm happy enough to critique my friends, like
> Sasha, whom I know well (and I know he
> wouldn't mind me picking on him even after his death, he loved this kind of
> argumentation). But I have no desire
> to pass judgment, positive or negative, on Eliezer and others whom I don't
> know.... I guess this makes me a weirdo, but that's not really news now, is
> it...
Basically, I'm saying that Social Darwinism isn't part of Extropianism; it was
Sasha's private opinion. That's the part of the FAZ article that I'm
objecting to. I think you got a mistaken picture of Extropy from Sasha and
then passed it on to the FAZ readers. I totally understand if that was an
honest mistake - that's why I'm writing this response, isn't it?
Ben Goertzel wrote:
> to be totally frank, I have to admit to being a bit too sensationalistic in
> that article, in order to
> get the publisher excited. Hey, I'm human too... as of now at any rate ;>
Holding the sensationalism invariant, I would like to have seen something
along the lines of:
"But there's a streak of Social Darwinism in some Extropians, and that worries
me. I don't want to imply that all Extropians are Social Darwinists; some
explicitly see the march of technology as a means of combating poverty.
Nonetheless, I think that part of the appeal of libertarian transhumanism
rests on an over-simplification, a neat moral counter to guilt, that says that
people get what they deserve. The Extropians don't seem to be overly
concerned about whether the humans below the poverty line will get to
participate in this wonderful world of theirs. To be fair, some of them think
that today's differences in wealth will be smoothed over or wiped out entirely
by the changes ahead, or that the new world will be a vast improvement even
for the futuristic poor. I don't think that's enough. I don't think that
universal participation, or a universal chance to participate, is automatic,
and it worries me that those most intimately involved with the future seem to
have lost some essential, human compassion."
You'd still be dead wrong, of course, but you'd be much less dead wrong. In
particular, you would be explicitly saying that any Social Darwinism is an
undiscovered worm eating at the heart of Extropy, not a toast publicly made at
Extropian gatherings.
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence