From: Robin Lee Powell (email@example.com)
Date: Wed Dec 31 2003 - 12:59:04 MST
On Wed, Dec 31, 2003 at 09:40:08AM -0800, Michael Anissimov wrote:
> Robin, this is an interesting and entertaining essay!
> Congratulations on getting the motivation to write down some of
> your ideas and reasoning regarding the world-shaking issue of how
> humanity ought to approach the Singularity.
> I disagree with the way you present/argue some things though, so
> here I go with all the comments:
> 1. Why do you call Singularitarianism your "new religion"? I
> know it's basically all in jest,
No, it's not.
I define religion as a set of beliefs held in absence of evidence or
proof. Singularitarianism fits perfectly, regardless of how much I
might wish otherwise.
> but thousands of people have already misinterpreted the
> Singularity as a result of the "Singularity = Rapture" meme,
Yeah, I really haven't found any way to present the ideas without
invoking that comparison. However, bear in mind that the reason
everyone makes that comparison is that it's a perfectly *valid*
one. Seriously. The only substantial difference between the
singularity and the rapture is that no one involved in the
singularity claims to have had a vision from god.
> and I don't think they need any more encouragement. I would
> personally prefer that Singularitarians have the reputation of
> being extremely atheistic and humanistic.
I *am* atheistic and humanistic. But that doesn't change the fact
that I have no proof, and barely any evidence, that the singularity,
let alone the sysop scenario, will actually come about. That makes
it a religion by my definition.
> 2. Like Tommy McCabe, I too have a problem with the "FAI means
> being nice to humans" line. This gives a lot of people the
> mistaken impression that FAI is going to be anthropocentric,
See my response to him.
> 3. This is a fun paragraph:
> "Combining a few issues here. I believe that strong
> superintelligence is possible. Furthermore, I believe that to
> argue to the contrary is amazingly rank anthropocentrism, and
> should be laughed at. Beyond that, I think full AI is possible.
> It's the combination of the two that's interesting."
> I agree that people who believe strong superintelligence is
> impossible are memetically distant enough from Singularity-aware
> thought that trying to avoid offending/confusing them is
> pointless. Saying that the combination of the two is what's
> interesting unfortunately gives the reader the impression that AI
> and strong superintelligence in concert is the only thing capable
> of initiating a Singularity (when self-improving IA seeds are
> indeed possible, albeit unlikely.)
IA? Is that a typo, or a term I'm unfamiliar with?
> It might cause readers to mistakenly overestimate the safety of
> the IA path. The Singularity is complicated and confusing enough
> that little wording issues such as these can actually influence
> how the paper is interpreted by casual surfers (if that matters.)
My point was that sentience in a computer substrate allows easy
self-modification; I've edited that paragraph a bit, please let me
know if it's clearer.
> 4. It seems like you're saying the range of possible
> Singularities basically breaks down into either "seed AI" or
> "uploading", when other IA techniques are indeed possible.
> Pre-uploading technology could probably be applied to yield
> substantial human intelligence enhancements, even though AI would
> almost certainly come before that as well.
There are probably other ways to produce superhuman intelligence,
but I honestly don't think that any of the interesting ones (i.e.
ones that produce intelligences meaningfully smarter than every
human has ever been) are likely to come before superintelligent AI,
and a comparison of possibilities is beyond the scope of this essay.
> 5. " Source code, /any/ source code, is a paragon of clarity by
> comparison." gives the audience the impression that you are
> worshipping code. :) Of course code will be "clearer" in a
> mathematical sense but "paragon of clarity" in the sense of "it
> works cleanly" would take a lot of programming effort, of course,
> and not any code would qualify.
Note the 'by comparison'. Added: "Only by comparison, of course;
there's some <em>really</em> bad source code out there."
> 6. "You see, Eliezer <http://yudkowsky.net/beyond.html> has
> convinced me that a Friendly AI must be the first being to develop
> strong nanotech on Earth, or one of the first, or we are all going
> to die in a mass of grey goo." makes you sound like a cult victim,
> unfortunately. :(
One of the comments I've made to my friends is that I've always
wanted to be in a cult. Nice easy sense of belonging and purpose.
But I'm too self-analytical to allow it.
However, your point is well taken. Edited.
> I know it's fun to write down stuff exactly as it sounds in our
> heads, but with Singularity issues, the wrong presentation can
> really damage your credibility... I also think it's important
> that we present the FAI meme in a way that doesn't focus on
> Eliezer so much - even though he originated the idea, FAI-esque
> thinking has been going on for the past decade or two, and its
> present day supporters include people like Nick Bostrom, Brian
> Atkins, etc, not just Eliezer. Placing too much emphasis on
> Eliezer will also make you look like a cult victim.
Show me someone else who's actually got a coherent general AI theory.
I have no attachment to Eliezer himself, but I honestly don't know
anyone else that's doing what he's doing.
> 7. "Please understand that if someone gets to strong nanotech
> before everyone else, they rule the world. This is not a subject
> for debate, you can't fight back, there is no passing Go or
> collecting two hundred dollars." is put very clearly, and
> concisely, and correctly. A little skimp on the explanations
> again, but I suppose that if people seriously question you here,
> they aren't likely to understand the issues surrounding FAI
That's kind of where I'm at, yeah. And as I said elsewhere, this is
really directed at SL1 and 2.
> 8. It could be nitpicking, but near the end of the essay, I would
> personally say we're working towards a "successful" or
> "benevolent" Singularity, rather than a "sysop scenario". "Sysop
> scenario", sadly, gives people the wrong idea 90% of the time.
Good point. Edited.
> Anyway, congratulations again on writing something. Politics is
> indeed largely irrelevant. This becomes clear around high SL2, as
> a matter of fact. At the very least, politics is something we
> cannot influence unless we pursue high-leverage goals, like
> devoting our lives to politics, or, far better yet, building a
> Friendly AI.
--
Me: http://www.digitalkingdom.org/~rlpowell/ *** I'm a *male* Robin.
"Constant neocortex override is the only thing that stops us all from
running out and eating all the cookies." -- Eliezer Yudkowsky
http://www.lojban.org/ *** .i cimo'o prali .ui
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:43 MDT