From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Wed Dec 31 2003 - 12:06:41 MST
On Wed, Dec 31, 2003 at 11:24:19AM +0000, Mitchell Porter wrote:
>
> From the essay:
>
> >a Friendly AI must be the first being to develop strong nanotech
> >on Earth, or one of the first, or we are all going to die in a
> >mass of grey goo
>
> Something about this strikes me as wrong-headed. Since grey goo is
> considerably closer in time than Friendly AI, an imperative like
> this will lead people to focus on stopping "strong nanotech",
> rather than creating Friendly AI.
If they read that paragraph all by itself, maybe.
I don't feel a lot of responsibility for people's inability to read
for content.
> I don't think we have the luxury of assuming that "Friendly AI
> before strong nanotech" is very likely.
I went to great lengths to make it clear that I don't think it's
very likely at all. That's why it needs working towards.
> The advent of nanotechnology does not lead immediately to
> extinction,
I'm not convinced of that.
> Also: "strong nanotechnology" has in the past denoted any form of
> molecular manufacturing, whether feedstock-driven or
> "free-living", with "weak nanotechnology" being all the other
> nanoscale stuff. I think it's one of umpteen terms that Foresight
> has invented in order to distinguish the assembler vision from
> other activities claiming the name of nanotechnology.
Ah. Do you have other terminology to offer in its place?
"non-self-replicating nanotech" is a bit of a mothful. 8)
-Robin
--
Me: http://www.digitalkingdom.org/~rlpowell/ *** I'm a *male* Robin.
"Constant neocortex override is the only thing that stops us all from
running out and eating all the cookies." -- Eliezer Yudkowsky
http://www.lojban.org/ *** .i cimo'o prali .ui