Re: How hard a Singularity?

From: Samantha Atkins (samantha@objectent.com)
Date: Sat Jun 22 2002 - 22:23:11 MDT


Eliezer S. Yudkowsky wrote:
> Smigrodzki, Rafal wrote:
> > Eliezer S. Yudkowsky wrote:
> >
> > Vernor Vinge's original stories of
> >> the Singularity were based around the blazing disruptive power of
> >> smarter-than-human intelligence
> >
> > ### You build your business plan on the idea of a hard-takeoff
> > Singularity.
> > I agree this is a prudent decision - the "slow" variant is much less
> > dangerous, and would involve a huge number of contributors, with
> > potentially less opportunity for a small group to make a difference. It
> > is reasonable to concentrate on the most scary scenario, even if it
> > isn't the most likely one.
>
> I do not consider a soft Singularity to be any less scary than a hard
> Singularity. I think this is wishful thinking. A Singularity is a
> Singularity; the Singularity doesn't come in a "soft" version that lets you
> go on a few dates before deciding on a commitment. That option might be
> open to individual humans (or not), but it is not a real option for
> humanity.
>
> > While I also believe that the fast Singularity is a distinct
> > possibility,
> > I don't have an intuition which would help me decide which variant is
> > more likely and by what odds.
> >
> > What is your intuition? Is it a toss-up, a good hunch, or (almost) dead
> > certainty?
>
> I would call it dead certain in favor of a hard takeoff, unless all the
> intelligences at the core of that hard takeoff unanimously decide
> otherwise.
> All economic, computational, and, as far as I can tell, moral indicators
> point straight toward a hard takeoff. The Singularity involves an inherent

I am curious what the moral indicators you speak of are. It
is not at all certain that fewer deaths will occur during a
Singularity or soon thereafter, if that is a primary moral
driver.

> positive feedback loop; smart minds produce smarter minds which produce
> still smarter minds and so on. Furthermore, thought itself is likely to

Once we get AI designing other AI, or transhumans improving
their own intelligence and systems, this seems true enough
within whatever limits might apply. Given this, a Singularity
seems inevitable, so the question becomes what kinds of ethics
the transhumanists (soon to be trans- and post-humans) and
their creations have, will have, and will follow. At least,
this is a crucial question if one's morality includes
preventing senseless death and suffering of sentients.

> fall through to much faster substrate than our 200Hz neurons. The closest
> we might come to a slow Singularity is if the first transhumans are pure
> biological humans, in which case it might take a few years for them to
> build AI, brain-computer interfaces, or computer-mediated broadband
> telepathy with 64-node clustered humans, but my guess is that the first
> transhumans would head for more powerful Singularity technologies
> straight out of the gate.
> Beyond that point it would be a hard takeoff. I see no moral reason for
> slowing this down while people are dying.

The moral reason is that a hard[er] takeoff may result in
vastly more people dying, even in the entire race ending
forever. At this point it is a real crap-shoot whether the
Singularity intelligences will themselves be stable and not
fall immediately into a super-war or some other malady. Given
that, my primary allegiance has to be with the existing
sentients.

- samantha


