Fwd: Re: How Kurzweil lost the Singularity

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Jun 19 2002 - 13:05:13 MDT


attached mail follows:


Anand,

Yes, I would like to respond to Eliezer. You're welcome to post this
response on your mailing list:

I think that Eliezer is misunderstanding my statements, intentions, and
efforts. First of all, he states "Kurzweil's entire *being* is directed
toward predicting the Singularity - *not* nudging the Singularity in any
direction." The fact is that the bulk of my efforts are involved in
technology creation efforts, with only a portion devoted to talking and
writing about technology. I do believe that as technologists, we have an
ethical responsibility to apply our efforts in ways that will promote
positive human values, albeit that we don't always have a consensus on what
those are. Most of my efforts have been devoted to developing technology
for persons with disabilities, and towards enhancing human expression in
areas such as music, and I do give a high priority to considering the impact
that technologies I'm involved in creating will have on society.

I am familiar with Eliezer's efforts at defining and articulating ways that
we can promote what he calls "friendly AI," and I applaud his concern and
efforts in this direction. I don't believe that such efforts are sufficient
by themselves, and Eliezer would probably agree with this. I don't think
that we have enough knowledge today to define a reliable strategy for
assuring that AI (or other advanced technologies) will remain "friendly,"
but the dialogue on how to achieve this is certainly worthwhile and not
premature. It's an effort we will need to maintain and intensify,
particularly as we get closer to these developments. I have said many times
that these technologies are advancing on many fronts, and I believe that a
critical aspect of assuring that these future technologies are helpful
rather than harmful is that everyone consider and address the ethical
issues in every project and in every decision. There's no one "magic
bullet" strategy that is going to assure that we avoid catastrophic
downside scenarios. I do agree, however, that it is not too early to define
these downsides and to develop multiple strategies for averting them.

So, in summary, I believe that Eliezer's efforts in this direction are
important and worthwhile. However, he is not correct that I am unconcerned
with this critical issue. I've said on many occasions that it's the number
one challenge facing our civilization in the 21st century.

Ray Kurzweil

-----Original Message-----
From: Anand AI [mailto:trans_humanism@msn.com]
Sent: Sunday, June 16, 2002 4:53 AM
To: ray@kurzweilai.net
Cc: sentience@pobox.com
Subject: Fwd: How Kurzweil lost the Singularity

Ray,

I thought you might be interested in reading, and possibly replying to,
this SL4 (mailing list) message by Yudkowsky. The SL4 replies to this
message can be found at http://sysopmind.com/archive-sl4/current/, under the
header of this email's subject.

Permission has been received to forward this message. In forwarding it, I
do not necessarily endorse the opinions expressed within it.

Best wishes,

Anand

----- Original Message -----
From: "Eliezer S. Yudkowsky" <sentience@pobox.com>
To: "SL4" <sl4@sysopmind.com>
Sent: Saturday, June 15, 2002 11:39 AM
Subject: How Kurzweil lost the Singularity

Eliezer Yudkowsky wrote:
>
>Ben Goertzel wrote:
> >
> > Kurzweil, however, IS putting effort into helping people understand the
> > Singularity.
> >
> > And I'm sure that part of his motivation for doing this, is a desire to
> > nudge the Singularity in a better direction. A direction not too
> > thoroughly polluted by people's fear and incomprehension.
>
>
>Ben, to the best of my ability to understand it, Kurzweil's entire *being*
>is directed toward predicting the Singularity - *not* nudging the
>Singularity in any direction. On every occasion in which I have spoken to
>Kurzweil, the concept of influencing the Singularity in any way is met
>with blank incomprehension. As far as Kurzweil is concerned, he wins
>the argument when he convinces the audience that the Singularity will
>happen.
>
>Any conceptual model of the Singularity that allows for individual actions
>to accelerate or influence the Singularity is seen by Kurzweil as a
>weakness in the argument, because it appears to argue that "the
>Singularity requires individuals to do such-and-such." Kurzweil will
>always argue for the creation of AI based on neuroanatomical modeling
>of all cortical areas, and will never admit that a general understanding
>of intelligence is necessary or even that it could speed up the process,
>because in the current scientific environment it is easier for Kurzweil to
>defend the proposition that neurocomputational modeling is possible than
>it is for Kurzweil to defend the proposition that an understanding of
>intelligence is possible. As for the idea that "We can do this using
>neurocomputational modeling, and therefore the Singularity is provably
>possible, but an understanding of intelligence may allow us to build AI
>earlier without reverse-engineering the brain" - why, that's too complex
>for Kurzweil to explain on television. So it doesn't get said. It doesn't
>get defended. Ever. It's easier for Kurzweil to present a model of the
>Singularity in which *only* reverse-engineering plays a role, and so his
>thoughts appear to have conformed to the worldview that will let him
>win arguments in the current memetic environment.
>
>Kurzweil has, deliberately or inadvertently, accepted constraints upon his
>thinking which prohibit his model from corresponding to reality, and which
>prohibit him from accepting any role for individual action in the
>Singularity.
>
>Kurzweil believes in the inevitability of his curves, not in activism.
>Kurzweil wants to believe in the benevolence and inevitability of the
>Singularity and any argument of the form "You can do X and it will improve
>your chances of (a Singularity) / (a positive Singularity)" appears to him
>to be a vulnerability in his argument: "The Singularity *could* (go
>wrong) / (not happen) if not-X." Kurzweil will therefore argue against
>it. Kurzweil's entire worldview prohibits the possibility of Singularity
>activism.
>
>In fact, having watched Kurzweil debate Vinge, I've come to the conclusion
>that Kurzweil's worldview prohibits Kurzweil from arriving at any real
>understanding of the basic nature of the Singularity. Over the course of
>my personal interaction with Kurzweil, I've seen him say two really
>bizarre things. One was during the recent chat with Vinge, when Kurzweil
>predicted superhuman AI intelligence in 2029, followed shortly thereafter
>by the statement that the Singularity "would not begin to tear the fabric
>of human understanding until 2040". The second really bizarre thing I've
>heard Kurzweil say was at his SIG at the recent Foresight Gathering, when
>I asked why AIs thinking at million-to-one speeds wouldn't speed up the
>development of technology, and he said "Well, that's another reason to
>expect Moore's Law to remain on course."
>
>These statements are so absolutely bizarre that, after pondering what
>Kurzweil could have been thinking, I've come to the conclusion that what
>Kurzweil calls the "Singularity" is what we would call "the ordinary
>progress of technology." In Kurzweil's world, the Grinding Gears of
>Industry churn out AI, superhuman AI, uploading, brain-computer interfaces
>and so on, but these developments do not affect the nature of
>technological progress except insofar as they help to maintain Kurzweil's
>curves *exactly on track*. What we, and Vinge, call the "Singularity" are
>the events that grow out of transhuman intelligence however and wherever
>it arises; industry is of interest to us only insofar as it leads up to
>that point. What Kurzweil calls the "Singularity" is the inevitable,
>inexorable, and entirely ordinary progress of technology, which, in
>Kurzweil's world, *causes* developments such as transhumanity, but is not
>*changed* by transhumanity except in the same ways that industry has been
>changed by previous technological developments.
>
>What Kurzweil is selling, under the brand name of the "Singularity", is
>the idea that technological progress will continue to go on exactly as it
>has done over the last century, and that the inexorable grinding of the
>gears of industry will eventually churn out luxuries such as
>superintelligent AIs, brain-computer interfaces, inloading, uploading,
>transhuman servants, and so on. The gears of industry will then continue
>grinding at exactly the same pace, producing more and better
>superintelligent AIs, more and better transhumans, and so on. Kurzweil's
>timeline for Moore's Law continues unblinkingly from "Human-equivalent
>brainpower costs $1000" to "1000 times human brainpower costs $1000" a decade
>later. Kurzweil is not defending what we would regard as the Singularity;
>he is defending the idea of ordinary technological progress. As part of
>defending the inevitability and desirability of the Turning Gears of
>Industry, Kurzweil also defends the idea that the Gears of Industry will
>churn out transhuman technologies, and the idea that the transhuman
>technologies churned out by the Gears of Industry are safe, desirable
>luxuries. It so happens that one of the branches of Kurzweil's
>worldview - the production of transhuman intelligence - is known to us
>as the "Singularity". But Kurzweil's worldview does not contain any of
>our beliefs about the consequences and nature of transhuman intelligence.
>
>On the whole, Kurzweil's actions are probably a net benefit to the
>Singularity. Kurzweil is promoting a safe, sanitized, comparatively
>unalarming, optimized-for-defensibility meme, under the brand name of
>"Singularity", which bears a surface resemblance to the real concept of
>the Singularity as created by Vernor Vinge and preserved here. People
>who become interested in Kurzweil's pseudo-Singularity may go on to
>google on "Singularity" and subsequently end up at the Singularity
>Institute. People who learn to love transhumanity as a consequence of
>the Inexorable Gears of Industry may choose to take on transhumanity as
>a personal goal.
>
>But:
>
>1) Kurzweil's positive effects on the Singularity are an accident.
>Unless he is being deliberately dishonest, the positive consequences of
>his actions are unintended consequences.
>
>2) Despite his much greater potential to make a difference, it currently
>seems that Kurzweil will go on playing the role of a celebrity
>spokesperson, nothing more. His outlook prohibits him from seeing the
>possibility of influencing the Singularity in any way.
>
>3) Kurzweil's model is wrong enough that I cannot ethically help spread
>it. Kurzweil is providing a safe, sanitized, easily digestible view of
>something that is *not* ordinary. He is not being dishonest, but it would
>be dishonest for *me* to help spread ideas that I know to be attractive
>but untrue.
>
>At present Kurzweil is neither using his resources to accelerate the
>Singularity (in his capacity as an entrepreneur), nor even urging others
>to do so (in his capacity as an author). I therefore question whether we
>should be lined up around the block to congratulate Kurzweil on his
>altruism, until he either (a) calls in his next book for college students
>to enter Singularity-related professions or (b) throws a few bucks the way
>of neurocomputational modeling research. Right now Kurzweil appears to
>be a man with an idea that he believes is true. So he writes books about
>it, speaks publicly about it, uses his celebrity status to promote it, and
>in turn gains greater prestige and celebrity status as the idea comes to
>be associated with him. In this, Kurzweil is no different from anyone
>else with an idea. This does not make Kurzweil a bad person, but it
>doesn't make him Gandhi either. And it does not mean that Kurzweil is
>out to accelerate or improve the Singularity, either directly or
>indirectly.
>
>We are people with a cause, and our cause bears a vague resemblance to
>Kurzweil's idea, but we would be in error to try to see Kurzweil as a man
>with a cause. Currently, Kurzweil is a man with an idea. I wish I knew
>how to nudge people with ideas into becoming people with causes.
>
>--
>Eliezer S. Yudkowsky http://intelligence.org/
>Research Fellow, Singularity Institute for Artificial Intelligence



