RE: How hard a Singularity?

From: Smigrodzki, Rafal ()
Date: Mon Jun 24 2002 - 16:12:42 MDT

I think I failed to send the following message on Sunday, but in case you
have already seen it, please accept my apologies.


Eliezer S. Yudkowsky [] wrote:

I do not consider a soft Singularity to be any less scary than a hard
Singularity. I think this is wishful thinking.

### I think a slow Singularity would be inherently safer, if it were to
occur (you *can* send troops, dismantle the internet, use nuclear weapons,
etc. to stop it if you think it's going bad). However, I do agree with you
that it is most likely wishful thinking to expect that the "natural" pace of
superintelligent self-enhancement will be slow.

Once a human-level AI exists, it can be copied and will work 24/7 on
self-enhancement - and it should be able to squeeze a lot more performance
out of existing hardware simply by virtue of its inhuman persistence, plus
cooperation among copies equivalent to a large team of top-flight
programmers. This should be enough to start a positive feedback loop.


I would call it dead certain in favor of a hard takeoff, unless all the
intelligences at the core of that hard takeoff unanimously decide otherwise.

### I do not share this certainty, but then you are the better informed
person. However, I'd like to raise the following objection - what if there
are some natural-law-type limits to the total problem-solving ability that
can be controlled by a single self-aware unit? There are limits to the size
of a dinosaur that are not apparent when you are building a mouse. The SI
*could* find itself limited by the sheer complexity of the processing needed
to produce additional growth. BTW, I find it quite useless to speak of AIs
1,000 or 1,000,000 times smarter than humans, as some writers like to do,
without a clearly defined metric of what that means. We shouldn't forget
that even among humans, with our relatively small differences in
intelligence, it is quite difficult to provide standards, except for
statistically based tools like IQ.

Another foible is believing that decisions humans make today regarding SAI
will substantially shape reality over the next couple billion years. Most
likely the only long-term results will be similar in importance to the
cosmic microwave background - small ripples on the surface of reality. The
laws inherent in the evolution of intelligence will provide the bulk of the
structure, no matter what we do.

Another objection is that a human-level AI alone would be hugely effective,
yet still easily controllable by humans - it might be able to work out the
basics of uploading without being an independent moral philosopher. I
understand that there would be some risks - a human-level AI with millions
of copies running on computers all over the world (as would be needed to
solve our problems) would be an SAI just waiting to happen. To reliably
prevent the copies from being modified by hackers and started on the
self-enhancement pathway might be difficult if not impossible. This is why,
all in all, I tend to agree with your approach - trying to get past the
human-level AI stage ASAP, while preserving a Friendliness not corruptible
by human influences. I am not dead certain though, it's more like a good
hunch with


I would say that the Singularity "wants" to be hard; the difficulty of
keeping it soft would increase asymptotically the farther you got, and I see
very little point in trying.

### Yes, I agree, but not quite - the difficulty of keeping the Singularity
slow would be enormous right at the beginning, at the level of human-level
AI (as I wrote above). Once you get to the level where humans have no direct
input, you might, paradoxically, be able to slow down: a Friendly AI might
realize that further growth in certain directions would damage Friendliness,
or the FAI might encounter fundamental moral dilemmas which can only be
addressed by superhuman intelligences while preserving the principle of
autonomy, so that the AI is morally constrained from making those decisions
for us. The FAI would then halt its enhancement until enough humans could
reach the AI's level and provide guidance. Since malicious hackers would not
be able to change the AI's mind by direct meddling with its code, such a
decision would have the force of law (and the FAI would stop any independent
AI work, at all levels). If a slow Singularity is better for us, your FAI
will tell us.

Indeed, one of the crucial elements of stable Friendliness will be the
ability to predict moral challenges in advance and to adjust development
accordingly. I think you did address this issue in CFAI.


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT