Re: How hard a Singularity?

From: Eugen Leitl (eugen@leitl.org)
Date: Sat Jun 22 2002 - 07:13:34 MDT


On Sat, 22 Jun 2002, Eliezer S. Yudkowsky wrote:

> > Considerable leverage is available to people to inhibit the kinetics
> > of early stages via legacy methods.
>
> Name EVEN ONE method.

Isn't it quite obvious? Laws and their enforcement.

If you're working with radioactive materials (especially fissiles), nerve
agents, pathogens, or recombinant DNA, you're subject to them. I distinctly
hope that anything involving molecular self-replication in a free
environment and ~human-level naturally intelligent systems will see heavy
regulation, at least initially.

Maybe your AI will turn out to be all love and light. Then again, maybe not.

> > No such leverage is available for later stages. This is our window of
> > operation to reduce our vulnerabilities by addressing some of our key
> > limitations.
>
> How? I've been asking you this for the last few years and have yet to
> receive a straight answer. Pardon me; I got a straight answer over IRC

This is hardly accurate. You haven't been asking for the last few years,
and whenever you've asked I've answered (unless unable to due to real-life
intrusions). Perhaps you're just not listening.

> once, which you later disclaimed as soon as I mentioned it.

Once again, we're talking about avoiding specific developments. The
relevant threat for this forum is runaway superintelligence: a Singularity
turned Blight, and the death of us all as a side effect.

Don't expect a specific scenario; the more specific it is, the more
irrelevant it becomes. What we could do is draft broad guidelines, which
necessarily need to be adaptive in nature.

I told you the inhibiting aspects: regulating, tracking, enforcing. This
is not pretty, nor does it guarantee 100% success, but it's a lot better
than nothing. Talk to your friendly molecular biologist working in a Level 3
facility about how they're coping with the regulatory load. Remember that
the temporal scope of the regulations is limited, the hard limit being the
late stages of the Singularity, which will shrug off any regulations imposed
by previous players due to the rising power gradient.

The positive aspects involve a number of issues. As you frequently
mention, people are dying. We need to address this immediately with life
extension, validation of cryonics, and large-scale deployment of both.

This is a stopgap measure; the long shot is individual molecular therapies
and medical nanotechnology. The issues so far are about stabilizing life;
the next facet is enhancement.

The Achilles heel of our limited adaptability is biology. We need to
either enhance this substrate or switch to a new one in a discontinuous
process. At this stage of the game it is too early to tell which of these
approaches will prevail. I tended to favour discontinuous migration, but
the complexities of biology make it extremely demanding to model. The
general objective is to become indistinguishable from the enemy, albeit in
a slow, gradual process with as few dead people on our hands as possible.

However, it is too early to tell. We need to sample as many paths as
possible, given the limits on time and financial resources.

There's more, but I've got something to do this Saturday afternoon.
 
> > I have to disagree on the latter, since the
> > foothills (defined by presence of basically unmodified people) can be
> > obviously engineered.
>
> How?

I've begun to answer this above. It would actually cast a good
light on the transhumanist community if we for once got off our collective
asses and provided a set of policy guidelines before the likes of Fukuyama
do.
 
> What concrete reason do you have for expecting a "wonder" in this case?

I was merely being sarcastic. I don't expect any wonders.
 
> I guess that makes "human intelligence" immoral, then, because I don't know
> of any path into the future that involves zero existential risk.

Life is uncertain. It doesn't mean we should stroll into a minefield just
because we can.
 
> > I should hope not. It would seem to be much more ethical to offer
> > assistance to those yet unmodified to get onboard, while you're still
> > encrusted with the nicer human artifacts and the player delta has not yet
> > grown sufficiently large that empathy gets eroded into indifference.
>
> You know, maybe I shouldn't mention this, since you'll probably choose to
> respond to it instead of my repeated questions for any concrete way of
> producing a soft Singularity; but if you believe that all altruism is

I don't think the archives show many of your repeated questions going
unanswered, and I wish you'd stop claiming that. In this post alone you
did it twice.

> irrational, why do you claim to be currently altruistic? Do you see

I claimed no such thing.

What I would claim (though I can present no evidence either way) is that
the human primate definitely shows the capacity to act nicely towards players
who are unable to meaningfully reciprocate (like helping a trapped bug).
This strikes me as an evolutionary artifact, and irrational (at least I've
missed any plausible explanation of why it is compatible with rationally
selfish behaviour). I like this behaviour.

> yourself as having chosen altruism "over" rationality as the result of your

I haven't chosen much, being raised a human. I don't know what your
definition of altruism is, so I can't say whether I'm an altruist or not.
It seems that the ROI favours cooperative strategies among agents, provided
they're smart and the interactions are iterated. Both of the latter should
increase considerably within our lifetimes, thus favouring more benign
cooperative strategies.
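
To make the ROI point concrete, here is a minimal sketch of my own (plain
Python, textbook prisoner's dilemma payoffs; the strategy names and numbers
are illustrative assumptions, not anything from this thread). It shows that
the per-round return from exploiting a reciprocator collapses towards the
mutual-defection payoff as the number of iterations grows, while mutual
cooperation holds steady:

    # Iterated prisoner's dilemma, textbook payoffs (illustrative only).
    PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

    def tit_for_tat(opponent_history):
        # Cooperate first, then copy the opponent's last move.
        return 'C' if not opponent_history else opponent_history[-1]

    def always_defect(opponent_history):
        return 'D'

    def play(strat_a, strat_b, rounds):
        hist_a, hist_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strat_a(hist_b)   # each strategy sees the other's past moves
            move_b = strat_b(hist_a)
            pa, pb = PAYOFF[(move_a, move_b)]
            score_a, score_b = score_a + pa, score_b + pb
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score_a, score_b

    for rounds in (1, 10, 100):
        coop, _ = play(tit_for_tat, tit_for_tat, rounds)
        defect, _ = play(always_defect, tit_for_tat, rounds)
        print(rounds, coop / rounds, defect / rounds)   # per-round payoff

More iterations (and smarter, more retaliatory partners) shift the ROI
towards cooperation; one-shot encounters favour defection.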

I tend to adhere to being nice to lesser beings (notice that I worked in
an animal research facility, so clearly there are priorities), even if I'm
not aware of a rational reason to do so.

> "legacy" empathy? I can't see trusting someone who sees the inside of their
> mind that way.

I guess that's only fair, since I don't trust you with FAI either, or
whatever that thing is called today.

Lest anyone misinterpreted what I said: "...while you're still encrusted
with the nicer human artifacts and the player delta has not yet grown
sufficiently large that empathy gets eroded into indifference."

What I'm saying is that the rationally selfish strategy doesn't seem to
favour agents who engage in symmetric transactions with agents unable to
reciprocate. Since we run the risk of losing empathy with the rest of
humanity as we move away via Lamarckian and Darwinian evolution, it is
imperative that we make very good use of that empathy while it lasts.

It may of course turn out that there's some unknown higher-order ethics at
play here, and the sustainability and quality of empathy will assert
themselves. But since we don't know that for certain, we have to play it safe.

> You have yet to give even a single reason why we should think earlier
> stages are controllable. What is an "inhibition agent" and how does
> it differ from magical fairy dust?

Fairies typically don't manifest as jackbooted thugs wielding projectile
weapons.

The inhibition agent is a metaphor. Ever seen runaway polymerization? It's
a nucleated runaway reaction (in a large volume, due to the reaction
enthalpy). It's actually a good metaphor, because it has a plateau.
Inhibitors terminate nuclei, and the concentration of inhibitor influences
the early stage of the kinetics.
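
For anyone who hasn't watched this in a flask, a toy numerical sketch of my
own (Python, made-up rate constants, not real polymer kinetics) of why the
metaphor holds. Nuclei are generated continuously, the inhibitor terminates
them until it is used up, and only then does conversion take off; more
inhibitor delays the onset, but the plateau (full conversion) is reached
regardless:

    # Toy inhibited-polymerization kinetics (illustrative constants only):
    # constant nucleation, fast termination of nuclei by inhibitor Z,
    # chain growth consuming monomer M. Reports the time to 50% conversion.
    def time_to_half_conversion(z0, dt=0.01, t_max=1000.0):
        ri, kz, kt, kp = 0.01, 50.0, 10.0, 1.0   # arbitrary rate constants
        r, z, m, t = 0.0, z0, 1.0, 0.0           # nuclei, inhibitor, monomer, time
        while m > 0.5 and t < t_max:
            dr = ri - kz * r * z - kt * r * r    # nucleation - inhibition - mutual termination
            dz = -kz * r * z                     # inhibitor consumed terminating nuclei
            dm = -kp * r * m                     # chain growth consumes monomer
            r = max(r + dr * dt, 0.0)
            z = max(z + dz * dt, 0.0)
            m = max(m + dm * dt, 0.0)
            t += dt
        return t

    for z0 in (0.0, 0.5, 1.0):                   # inhibitor loading
        print(z0, round(time_to_half_conversion(z0), 1))

Doubling the inhibitor roughly doubles the induction period, but it doesn't
change where the reaction ends up, which is the point about the limited
temporal scope of any regulation.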
 
> They are at stake. The slower the Singularity, the more die in the
> meanwhile; and all known proposals for attempting to deliberately slow the

We'll just have to agree that our reality models are different. I wish we
had validated cryonics; there's a considerable unknown lurking there about
the radical life extension approaches available to us today.

> Singularity increase total existential risk. (I'm not talking about your
> proposals, since you have to yet to make any, but rather the proposals of

I wonder whether you've been deleting all my posts on all those lists
we've been on for years, since you've failed to see a proposal (one is
lurking behind about every other line).

> Bill Joy and the like.)

What are Bill Joy's specific proposals? I haven't run into a concise list
yet.


