Re: An essay I just wrote on the Singularity.

From: Tommy McCabe (rocketjet314@yahoo.com)
Date: Wed Dec 31 2003 - 07:46:11 MST


I have several objections to the points raised in this
essay. You say that the Singularity is a bad term for
these reasons:

"We can't see what lies beyond twenty minutes from now
regardless. Just ask a stock broker."

We can't predict the future down to the letter, true,
but I'd be willing to bet that in twenty years, let
alone twenty minutes, the planet called Earth will
still be here, the stockbroker will be either still
human or dead, and there will still be a stock market.
If the Singularity happened, you couldn't guarantee
any of those.

"This has next to nothing to do with the mathematical
term 'singularity' it was derived from, as that
implies a total cessation of the graph beyond a
certain point, which isn't the kind of implication I
want to have made about humanity's future!"

Total cessation? Yes, on a hyperbolic graph, the
function becomes undefined at a certain point.
However, 1) actual values are not constrained to
follow mathematical equations, and 2) an undefined
point signifies that something unknown happens there,
not that everything stops. Maybe growth stops, maybe
it slows down to exponential or even linear, maybe it
somehow goes to infinity. We're currently too dumb to
know.
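
To make point 2) concrete (my own illustration, not
something from the essay): the simplest hyperbolic
growth law is

    x(t) = C / (t_s - t),

which satisfies dx/dt = x^2 / C, so the growth rate
scales with the square of the current value rather
than with the value itself, as exponential growth
(dx/dt = k*x) does. The function blows up as t
approaches the "singularity" time t_s, but all that
tells you about the real world is that the model
stops applying there.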

"It fails, IMO, to address the important point: the
coming change, whatever you call it, will not be
created by technology in any respect (technological
growth, new technologies, what have you). It will
happen if, and only if, greater than human
intelligence becomes part of humanity's reality. Then,
and only then, will we truly have reached a point
where things have fundamentally changed because then
we're dealing with minds creating technology that are
fundamentally alien to us, or at least better. At that
point, things have truly changed."

Technology is defined as "the application of science
to practical problems", or something like that.
Therefore, any superintelligence we create must be
technology, or at least the product of technology. An
infrahuman AI is a mind that is fundamentally alien to
us, but by definition it falls short of a human on at
least some things. The dividing line is the point
where technology creates something smarter than us,
which is unlike any other technology, because it could
think of things no human could ever possibly think of.

As a matter of fact, my only complaint about the term
'Singularity' has been that people tend to see it as a
Bible-type doomsday.

Quote from the essay: "speaking of the (still very
theoretical) possibility of human-level generalized
intelligence." Even though a human-level AI is
extremely likely to become transhuman in about 15
seconds, human-level AI isn't the ultimate goal.
Transhuman AI is.

Next point: even though transhuman AIs can (and
should) have the capability to understand emotions,
building emotions into the AI is not a good idea by
any means; you could end up with critical failure
scenarios 9, 12, 19, 21, and probably some others.

I agree with most of the things said about nanotech,
except for the last one: "The big difference with
nanotech versus nukes is that there is a first strike
advantage. A huge one." The 'advantage' of a first
strike, in this case, is probably going to be
responsibility for the destruction of the planet.

A Friendly AI doesn't have the supergoal of being nice
to humans; it has the supergoal of acting friendly
toward sentients in general. A Friendly AI that is
Friendly toward humans shouldn't try to blow those
same humans to smithereens the minute they upload.

I agree with the assumption that transhuman AI is
possible. However, creating copies of yourself and
then modifying the copies is not only an inefficient
use of computing power; any AI smart enough to have
reached the stage of reprogramming ver own source
code is likely to have the power to destroy the
planet, or will have it very soon, and thus modelling
the AI in its full form runs the risk of the model, or
the copy, or whatever you want to call it, destroying
the planet. Recursive self-improvement is a vastly
more powerful version of design-and-test; the testing
part, for any AI with the capability to blow up the
planet, should use the partial AI shadowself described
in CFAI: Wisdom tournaments.

I also agree that strong nanotech is possible.

The last assumption, namely that "A Friendly,
Superintelligent AI Gets To Nanotech First", shouldn't
be taken as an assumption at all. It's what I'd like
to happen, certainly, but there's no money-back
guarantee on it. Nanotech is already far more advanced
than AI (see MistakesOfClassicalAI on the Wiki).

I agree that getting nanotech when you are a
transhuman is likely to be easy, but note that you
don't even need to convince the programmers to hook up
nanotech tools. A superintelligent AI could simply
copy itself, send the copy over the Internet, and take
over all the computers in the most advanced nanotech
lab on the planet. I also think that even having to
'convince' the programmers is overkill; if the AI is
Friendly, the programmers should just give it the
necessary tools; no tricking required.

I also agree that the first one to get nanotech pretty
much has ultimate control of the planet.

Ruling The World is a very, very bad term for it; it
confuses the possession of absolute physical power
with the exercise of absolute social power. The former
is probably wanted; the latter isn't, and in any case
we're dealing with a being without a tendency to abuse
power. I agree that a superintelligence is certainly
capable of the examples given; however, that's
probably just the beginning of the list of things a
superintelligence would be capable of.

I also agree with the idea of the SysOp scenario
presented.

I agree that the chances of that particular scenario
happening aren't that good; however, given Friendly
superintelligence, whatever scenario does play out is
likely to be just as good as the one given. The only
lottery is in the chances of achieving a Friendly
transhuman before we're blown to smithereens. And
those are odds we can improve directly. Although I'm
still searching for some way to do so....



