From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Jun 22 2002 - 05:45:59 MDT
Eugen Leitl wrote:
> On Sat, 22 Jun 2002, Eliezer S. Yudkowsky wrote:
>
>> I do not consider a soft Singularity to be any less scary than a hard
>> Singularity. I think this is wishful thinking. A Singularity is a
>
> I disagree. There's a qualitative difference if changes ordinarily
> occurring during a decade are happening within a second. We can handle a
> lot of change during a decade. Lumping that change in a single second
> completely overloads our capability to adapt. Considerable leverage is
> available to people to inhibit the kinetics of early stages via legacy
> methods.
Name EVEN ONE method.
> No such leverage is available for later stages. This is our window of
> operation to reduce our vulnerabilities by addressing some of our key
> limitations.
How? I've been asking you this for the last few years and have yet to
receive a straight answer. Pardon me; I got a straight answer over IRC
once, which you later disclaimed as soon as I mentioned it.
>> Singularity; the Singularity doesn't come in a "soft" version that lets
>> you go on a few dates before deciding on a commitment. That option
>> might be open to individual humans (or not), but it is not a real
>> option for humanity.
>
> You're arguing that you can influence the onset, but not the general
> shape of the Singularity. I have to disagree on the latter, since the
> foothills (defined by presence of basically unmodified people) can be
> obviously engineered.
How?
>> I would call it dead certain in favor of a hard takeoff, unless all the
>> intelligences at the core of that hard takeoff unanimously decide
>> otherwise.
>
> Wonders have been known to happen.
What concrete reason do you have for expecting a "wonder" in this case?
>> All economic, computational, and, as far as I can tell, moral
>> indicators
>
> My moral indicator might be broken, but I don't see how activities
> involving a very real probability of complete extinction of all biological
> life on this planet can be called moral.
I guess that makes "human intelligence" immoral, then, because I don't know
of any path into the future that involves zero existential risk.
>> fall through to much faster substrate than our 200Hz neurons. The
>> closest we might come to a slow Singularity is if the first transhumans
>> are pure biological humans, in which case it might take a few years for
>> them to build AI, brain-computer interfaces, or computer-mediated
>> broadband telepathy with 64-node clustered humans, but my guess is that
>> the first transhumans would head for more powerful Singularity
>> technologies straight out of the gate.
>
> I should hope not. It would seem to be much more ethical to offer
> assistance to those yet unmodified to get onboard, while you're still
> encrusted with the nicer human artifacts and the player delta has not yet
> grown sufficiently large that empathy gets eroded into indifference.
You know, maybe I shouldn't mention this, since you'll probably choose to
respond to it instead of my repeated requests for any concrete way of
producing a soft Singularity; but if you believe that all altruism is
irrational, why do you claim to be currently altruistic? Do you see
yourself as having chosen altruism "over" rationality as the result of your
"legacy" empathy? I can't see trusting someone who sees the inside of their
mind that way.
> We don't have to shovel the inhibition agent by the ton all the time.
> Later stages can't and shouldn't be controlled.
You have yet to give even a single reason why we should think earlier stages
are controllable. What is an "inhibition agent" and how does it differ from
magical fairy dust?
>> very little point in trying.
>
> Once again we disagree. There's tremendous point in trying if human lives
> are at stake.
They are at stake. The slower the Singularity, the more die in the
meanwhile; and all known proposals for attempting to deliberately slow the
Singularity increase total existential risk. (I'm not talking about your
proposals, since you have yet to make any, but rather the proposals of
Bill Joy and the like.)
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence