From: Ben Goertzel (ben@webmind.com)
Date: Thu Aug 02 2001 - 05:19:57 MDT
Brian, I think your analysis of the peculiarities of Kurzweil's world-view
is spot-on, and I also appreciate your clarification that Kurzweil himself
is not the point here -- he's just a particular example of the general
phenomenon that even people who see what's coming are, for some reason, only
able to see 90% of it. This is related to what, in my question to Kurzweil
at his talk, I called "hard takeoff denial". What you're pointing out is
something like "Singularity malleability denial" -- refusal to confront the
fact that we, by our intentional actions, may significantly morph the
outcome of this huge transformation that's coming. The combination of these
denials is what I less generously referred to in my post as a kind of
"narrowmindedness", but you said it more pleasantly and more clearly.
Personally, I don't share the confidence of some that the Singularity will
necessarily be good for the human race. I think it has the potential to be
great for us, and also the potential to exterminate us. I'm with Eli in
believing that we need to work specifically to make it good. I don't
entirely agree with him on the specific AI-engineering mechanisms that will
succeed in this regard, but this is a pretty minor quibble in the big
picture (and perhaps he'll bring me around to his view once he's articulated
it more clearly and fully).
Another point is that there are LOTS of other people who basically see it
our way, but don't feel it's useful to post and read a lot of e-mail
messages about the topic. For instance, most of my Brazilian WM development
team sees these things roughly the same way as me, Eli, and Brian, but
they prefer to spend their time working on computer science rather than
chatting about philosophy. I felt the same way, and was working toward the
same goals *before* I started posting on this list. So don't assume that
just because some big spokespeople don't fully "get it", those of us who
babble about it a lot are the only ones who get it!!
-- Ben G
> -----Original Message-----
> From: owner-sl4@sysopmind.com [mailto:owner-sl4@sysopmind.com] On Behalf
> Of Brian Atkins
> Sent: Thursday, August 02, 2001 1:19 AM
> To: sl4@sysopmind.com
> Subject: Re: Article: The coming superintelligence: who will be in
> control?
>
>
> "Amara D. Angelica" wrote:
> >
> > Brian, an intriguing idea. Can you or anyone else elaborate?
> >
> > > -----Original Message-----
> > > From: owner-sl4@sysopmind.com [mailto:owner-sl4@sysopmind.com] On Behalf
> > > Of Brian Atkins
> >
> > > going beyond trend tracking and guessing what will happen. I'd like to see
> > > people take the issue farther and try to figure out a) can we manipulate
> > > the timing and character of the Singularity significantly b) if so, should
> > > we accelerate it?
>
> I'm not sure if you wanted me to elaborate on the fact that no one is
> talking about this issue, or to elaborate on possible answers to A and B.
> I'll do the former, because as you can see below I think Ray and the
> people on this list have already decided for themselves that the answer
> to B is Yes. The answer to A IMO is Yes, and I think Ray would also agree
> with that at least to a limited extent. What I am frustrated by is seeing
> people realize that both A and B are Yes, but then not helping to push the
> Singularity closer to us in time. So I'd like to get the word out about
> this issue, since (again IMO) this is even more important than simply
> realizing a Singularity is coming. Once you see it's coming, you have
> to go a step farther and "pick a side". Sitting on the fence like a reporter
> (no offense :-) is not a rational choice in this situation. If you believe
> the Singularity will be good for you (and I mean good in the sense of saving
> your life, not in the sense of whether Congress lowers your tax rate by 3%),
> then you should try to advance it. If you believe the Singularity will be
> bad for you, then you should try to prevent it. This is a world-changing
> bit of history.
>
> Here are some examples of the "blind spot" I'm talking about. Go look at
> part 4 of the Extro-5 Kurzweil talk here:
>
> http://www.kurzweilai.net/meme/frame.html?main=/articles/art0235.html
>
> and skip ahead to around 13:15, at which point Eliezer asks Ray something
> along the lines of "Well, you've described the Singularity and our
> progress to it so far, but you haven't said what kind of Singularity
> you would like to see or what time you would /prefer/ it to happen".
> Then Ray sits there for like 7 seconds (which makes me think he might
> not have thought about this much) before someone in the back says
> something that causes him to then go off on a tangent without answering
> the question. Very frustrating since that was the one question I wanted
> an answer to! :-)
>
> If you or anyone else here has ever seen him address this, I'd like to
> know about it. He seems to have chosen a clinical observer style when
> it comes to the Singularity, which lets him make predictions about what
> the future might be like, yet keeps him from considering that someone
> with his excellent grasp of the situation would be exactly the right
> kind of person to help guide and support the actual development. Here's
> a quote from his book precis:
>
> "Technology will remain a double edged sword, and the story of the Twenty
> First century has not yet been written. It represents vast power to be
> used for all humankind's purposes. We have no choice but to work hard to
> apply these quickening technologies to advance our human values, despite
> what often appears to be a lack of consensus on what those values should
> be."
>
> Confusing to say the least, unless he's running a secret AI or brain
> scanning project we don't know about :-) He advocates working to achieve
> the Singularity, and points out that this history is not yet written, but
> he provides no advice (outside of some possible future scenarios) on the
> best way to achieve it, whether we should try to accelerate its arrival
> (is it "ethical" to accelerate the Singularity?), and how we might do so
> if we decide we want to.
>
> He goes on to talk about the purpose of life, which he sees as evolving
> toward the Singularity. He says that, but then almost immediately goes
> back to pointing out that the real reason the Singularity will happen
> is economics. However, if he really feels that achieving a
> Singularity is the goal of life, then I'd like to ask him: what are your
> plans for after you finish your book? How will you help to achieve this
> goal? I don't see any answers to that, nor anyone else (besides people
> who hang around here) asking this question of themselves or of Ray.
>
> He ends his precis with the admonition that we all should "stick around
> so you might see the Singularity". That really clinches it for me -- if
> he had really internalized the "Singularity is the goal of life" idea,
> then he would instead be telling people to go out and help get the
> Singularity here more quickly so that even more people now living will
> be able to survive to see it.
>
> Now I don't want to look like I'm stuck on Ray; he's just the most
> glaring example to me because I've read so much of his stuff lately.
> He's also the biggest proponent of the Singularity in the mainstream
> world, and yet he still seems to be missing the final pieces of the picture.
> --
> Brian Atkins
> Director, Singularity Institute for Artificial Intelligence
> http://www.intelligence.org/