From: Christian Szegedy (firstname.lastname@example.org)
Date: Tue Jun 25 2002 - 01:04:16 MDT
Smigrodzki, Rafal wrote:
> I think I failed to send the following message on Sunday but in case
> you have already seen it, please accept my apologies.
> Eliezer S. Yudkowsky [mailto:email@example.com] wrote:
> I do not consider a soft Singularity to be any less scary than a hard
> Singularity. I think this is wishful thinking.
> ### I think a slow Singularity would be inherently safer, if it were
> to occur (you *can* send troops, dismantle the internet, use nuclear
> weapons, etc. to stop it if you think it's going bad). However, I do
> agree with you that it is most likely wishful thinking that the
> "natural" way for superintelligent self-enhancement will be slow.
> Once a human-level AI exists, it can be copied and will work 24/7 on
> self-enhancement - and it should be able to squeeze a lot more
> performance out of existing hardware simply by virtue of its inhuman
> persistence and cooperation among copies equivalent to a large team of
> top-flight programmers. This should be enough to start a positive
> feedback loop.
This is mainly a question of terminology: if the singularity is an
exponential process, then it is a "hard take-off" per definitionem,
so there is nothing to argue about. If it is slow, then it is not
"the singularity".
The only question is: how much time does it take from the development
of a superintelligence (a more-than-human-level intelligence) until the
singularity? Now I come to some very vague speculations; I can see the
following cases:
1) The SI is friendly: a hard take-off is safer, since the SI would do
a better job of controlling the singularity.
2) The SI is evil: it will want to survive, so it pretends to be
friendly in order not to get shut down prematurely. Since it is an SI,
we won't recognize the deception, so we can't do anything about it.
3) There are several SIs; some of them are evil, some of them are
friendly. Since humans are less intelligent, it is up to the friendly
SIs alone to help turn the singularity in a "positive" (whatever that
means) direction. Since the friendly SIs also benefit from Moore's law,
slowing the singularity is not necessary.
Bottom line: a slow singularity (meaning: a long period of time between
the creation of an SI and the "real singularity" - the acceleration of
technology at a superhuman level) does not seem to help. The only thing
that could help humanity is a preponderance of friendly ones among the
SIs.
Best regards, Christian
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT