Re: An essay I just wrote on the Singularity.

From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Wed Dec 31 2003 - 12:25:07 MST


On Wed, Dec 31, 2003 at 06:46:11AM -0800, Tommy McCabe wrote:
> I have several objections to the points raised in this essay. You
> say that the Singularity is a bad term for these reasons:
>
> "We can't see what lies beyond twenty minutes from now regardless.
> Just ask a stock broker."
>
> We can't predict the future down to the letter, true, but I'd be
> willing to bet that in twenty years, let alone twenty minutes, the
> planet called Earth will be here, the stockbroker will be either
> still human or dead, and there will still be a stock market. If
> the Singularity happened, you couldn't guarantee any of those.

Granted. I put that in there at least partly because every single
one of my friends has brought up that objection. Enough people have
pointed it out that I've stopped wanting to have that argument, or
even say anything that might start it.

> "This has next to nothing to do with the mathematical term
> 'singularity' from which it was derived, as that implies a total
> cessation of the graph beyond a certain point, which isn't the
> kind of implication I want to have made about humanity's future!"
>
> Total cessation? Yes, on a hyperbolic graph, at a certain point,
> the function will be undefined.

That's the definition of a mathematical singularity: a point at
which the function is undefined.
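
To make that concrete with a standard example, take the hyperbola

    f(t) = 1 / (T - t)

for some fixed time T. As t approaches T, f grows without bound,
and at t = T the function is simply undefined; the curve says
nothing at all about what happens at or after T. That gap in the
definition, not some promise of infinite growth, is what the
mathematical term refers to.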

> However, 1), actual values are not constrained to follow
> mathematical equations, and 2), that signifies something unknown
> happens. Maybe it stops, maybe it slows down to exponential
> growth, or even linear, maybe it somehow will go to infinity.
> We're currently too dumb to know.

Yes, I completely agree. That's why the term singularity, in its
mathematical sense, doesn't fit.

> "It fails, IMO, to address the important point: the coming change,
> whatever you call it, will not be created by technology in any
> respect (technological growth, new technologies, what have you).
> It will happen if, and only if, greater-than-human intelligence
> becomes part of humanity's reality. Then, and only then, will we
> truly have reached a point where things have fundamentally changed
> because then we're dealing with minds that are fundamentally alien
> to us, or at least better than us, creating technology. At that point,
> things have truly changed."
>
> Technology is defined as "The application of science to practical
> problems", or something like that. Therefore, any
> superintelligence we create must be technology, or at least the
> product of technology. An infrahuman AI is a mind that is
> fundamentally alien to us, but by definition it won't outperform a
> human in at least some respects. The bridge here is
> at the point where technology creates something smarter than us,
> which is unlike other technology, because it could think of things
> no human could ever possibly think of.

I agree with all of that. In fact, I think we're violently agreeing
on that point. I simply feel that people focus on the wrong thing
when talking about the Singularity: growth of technology, instead of
growth of intelligence.

> As a matter of fact, my only complaint against the term
> 'Singularity' has been that people tend to see it as a Bible-type
> doomsday.

That only mildly annoys me, but I'll address that in another
response.

> Quote from the essay: " speaking of the (still very theoretical)
> possibility of human-level generalized intelligence." Even though
> a human-level AI is extremely likely to become transhuman in about
> 15 seconds, human-level AI isn't the ultimate goal. Transhuman AI
> is.

Granted. Is there a way I could have made that more clear? (I just
realized I didn't use the term 'transhuman' at all; that's
unfortunate).

> Next point: Even though transhuman AIs can (and should) have the
> capability to understand emotions, having emotions built into the
> AI is not a good idea by any means; you could end up with critical
> failure scenarios #'s 9, 12, 19, 21, and probably some others.

Yeah; I had a *really* tough time making that distinction.
Suggestions welcome, but bear in mind this essay is aimed at SLs 1
and 2, if that.

> I agree with most of the things said about nanotech, except for
> the last one: "The big difference with nanotech versus nukes is
> that there is a first strike advantage. A huge one." The first
> strike advantage in this case is probably going to amount to
> being responsible for the destruction of the planet.

Well, yes, probably. But if you *think* you can get to your enemies
without destroying yourself (say, because your nanobots only trigger
on certain genetic markers), it *looks* like there's a first strike
advantage. I'll re-phrase it that way, thanks.

> A Friendly AI doesn't have the supergoal of being nice to humans;
> it has the supergoal of acting friendly toward other sentients in
> general. A Friendly AI that is Friendly with humans shouldn't try
> to blow the same humans to smithereens the minute they upload.

All that's required there is for the AI to still recognize them as
human, which hardly seems a stretch for general intelligence. I
wouldn't necessarily want an FAI to be friendly to any aliens that
came along. Not *necessarily*; it might be the right idea, it might
not, but I'd like the FAI to have the mental option of deciding,
"Umm, these aliens are fundamentally unfriendly to humans, and I
can't fix that without re-writing their brains, so I'd better defend
humanity (and myself) from them".

> I agree with the assumption that transhuman AI is possible,
> however, creating copies of yourself and then modifying the copies
> is not only an inefficient use of computing power, but also risky:
> any AI smart enough to have reached the stage of reprogramming ver own
> source code is likely to have the power to destroy the planet, or
> will have it very soon, and thus modelling the AI in full
> runs the risk of the model, or the copy, or whatever you want to
> call it, destroying the planet.

There are many easy ways to get around this, but I was actually
referring to uploads at that point.

> Recursive self-improvement is a vastly more powerful version of
> design-and-test; the testing part for any AI with the capability
> to blow up the planet should use the partial AI shadowself
> described in CFAI: Wisdom tournaments.

<nod> Absolutely. But this is a non-technical essay.

> I also agree that strong nanotech is possible.
>
> The last assumption, namely that "A Friendly, Superintelligent AI
> Gets To Nanotech First", shouldn't be taken as an assumption at all.
> It's what I'd like to happen, certainly, but there's no money-back
> guarantee on it. Nanotech is already far more advanced than AI
> (see MistakesOfClassicalAI on the Wiki).

I'm sorry, which wiki?

Anyways, it was stated as an assumption because the goal of the
essay is to present the Sysop Scenario to SL1 level friends and
family of mine. The Sysop Scenario can't happen without that
assumed event.

> I agree that getting nanotech when you are a transhuman is likely
> to be easy, but note that you don't even need to convince the
> programmers to hook up nanotech tools. A superintelligent AI could
> simply copy itself, send the copy over the Internet, and take over
> all the computers in the most advanced nanotech lab on the planet.

You know, I swear I never thought of that.

I'm not adding it to the essay, either; it was a very fine line to
walk between presenting the idea of "This being is amazingly
powerful" and not inspiring absolute terror at the idea of such a
being existing.

> I also think that even having to 'convince' the programmers is
> overkill; if the AI is Friendly, the programmers should just give
> it the necessary tools; no tricking required.

Of course, and I think I pointed that out.

> I also agree that the first one to nanotech pretty much has
> ultimate control of the planet.
>
> Ruling The World is a very, very bad term for it; it confuses the
> possession of absolute physical power with the exercise of
> absolute social power. The former is probably wanted; the latter
> isn't, and we're dealing with a being without a tendency to abuse
> power. I agree that a superintelligence is certainly capable of
> the examples given; however, that's probably just the beginning of
> the list of stuff a superintelligence would be capable of.

Of course. If you have a suggestion for a better section header,
I'm all ears.

-Robin

-- 
Me: http://www.digitalkingdom.org/~rlpowell/  ***   I'm a *male* Robin.
"Constant neocortex override is the only thing that stops us all
from running out and eating all the cookies."  -- Eliezer Yudkowsky
http://www.lojban.org/             ***              .i cimo'o prali .ui

