Re: Hard takeoff [WAS Re: JOIN: Joshua Fox]

From: Charles D Hixson
Date: Wed Feb 08 2006 - 18:17:58 MST

On Wednesday 08 February 2006 06:49 am, Russell Wallace wrote:
> On 2/8/06, Olie L <> wrote:
> > Furthermore, the longer it takes to develop an AI that can improve AI (~~
> > Seed AI), the more likely it is to create a faster take-off. Which is
> > more likely to create a "bad" situation.
> Though one could argue that the more time goes by without such occurring,
> the higher will become the subjective estimated probability that I'm right
> about hard takeoff being impossible.
> > > Russell Wallace wrote:
> > > ...
> > > 1. De facto world government forms, with the result that progress goes
> > > the way of the Qeng Ho fleets. ...
> >
> > We'll put this under "regulation", then, shall we?
> Yes; it wouldn't necessarily have to be a single polity like China after
> the Ming Dynasty or Japan under the Tokugawa Shogunate; separate
> over-regulation in a sufficiently large proportion of the countries with an
> advanced industrial base could have the same effect.

If you have either a monopoly situation, or an amicable oligopoly, then it's
possible to make agreements that restrict technological progress. The
problem is, each power center will have its own "skunkworks". Each will be
looking for an advantage. A monopoly ostensibly defeats this, though I
suspect that various departments of government would continue to seek
superiority. However, the shutting down of large-scale training and
communication about advanced technology would retard progress. This is one
of the main current eusocial(?) functions of the USPTO. The US is
"sufficiently dominant" that it wants to freeze the status quo. Other
countries, however, have other desires.

> > > 2. Continuing population crash renders progress unsustainable. (Continued
> > > progress from a technology base as complex as today's requires very
> > > large populations to be economically feasible.)
> >
> > This could be categorised more generally as a contributing factor to
> > severe economic recession.
> Yes. A modern chip factory for example costs several billion dollars, and
> the cost rises with each generation of semiconductors; this sort of
> development is only sustainable with the markets a large, thriving economy
> can provide.

That's been true for the last several decades. It's not clear that it will
continue to be true once nano-fabrication becomes more prevalent. (It *is*
clear that the current centers of power will attempt to *ensure* that
it's either too expensive or too difficult for the "hoi polloi" to have
it.)
> Similarly (4) - "total catastrophe" - doesn't have to be anything like an
> > 7) Engineering challenges on AGI - a variant on (5) - unforeseen limit
> > I can't say. I don't know that anyone else can reasonably deny with
> > sufficient knowledge: There may be impediments that slow the development
> > of AGI by many many decades. By this stage, other forms of technological
> > development may be advanced enough so that the "rapid takeoff" element of
> > AGI won't have the same disjunctive impact that it would in the next
> > century.
> Well, I don't think hard takeoff is possible, so I think 7 definitely
> applies. I don't see that as a problem though; a slow takeoff Singularity
> could work fine.
> - Russell

A slow take-off would be much more likely to be survivable. I don't,
however, see any reason to expect it unless it occurs while non-networked
computers are still too weak to support a full-scale awakening. If the
intelligence needs to do a lot of its thinking over Ethernet, that could
slow down its mental processes enough to yield a slow takeoff. If SETI@home
awoke tomorrow, we could expect a slow takeoff. (Not likely: it wasn't built
for that. Anyone want to start an AI@Home?)
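The latency argument can be made concrete with rough figures (the numbers
below are illustrative assumptions, not measurements): a local memory access
is on the order of 100 nanoseconds, while a LAN round trip is on the order
of 100 microseconds, so any "thought" that has to cross the network could
run several orders of magnitude slower than one confined to a single box.

```python
# Back-of-envelope comparison of "thinking" latency for a mind whose
# working memory is local versus one spread across an Ethernet LAN.
# All figures are rough, illustrative assumptions.

local_access_s = 100e-9   # ~100 ns: DRAM access on a single machine
lan_roundtrip_s = 100e-6  # ~100 us: typical LAN round trip

slowdown = lan_roundtrip_s / local_access_s
print(f"A networked thought-step is ~{slowdown:.0f}x slower")  # ~1000x
```

With those assumed numbers the slowdown is about three orders of magnitude,
which is the sort of handicap that could stretch a hard takeoff into a soft one.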

It is also my contention that the shape of the AI that emerges will be
determined by the job it was originally designed to do. That is where its
"instincts" will reside. Logical thought can't establish goals; it can
merely work within depictions of the world to accomplish them. One can think
of the "instincts" as the axioms of the logical system that the AI uses, and
its models of the world as its "rules of inference" (this latter is a bit
of a weaker analogy). Logic can check the world view, and decide it needs
revision, but it can't address the instincts, not even when they are in
conflict. (There are "hierarchies of need", where different instincts are
situationally given different importances, but this isn't, and probably can't
be, logically decided.)
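A toy sketch of that axioms-versus-inference split (all names here are
invented for illustration, nothing more): the agent can revise its world
model when observations contradict it, but its goal axioms are simply not
among the things the revision step is allowed to touch.

```python
# Toy agent: "instincts" are fixed axioms; the world model is revisable.
# Purely illustrative -- names and structure are invented for this sketch.

class Agent:
    def __init__(self):
        self.instincts = {"heal_patients", "stay_solvent"}  # fixed axioms
        self.world_model = {"aspirin_reduces_fever": True}  # revisable beliefs

    def revise(self, fact, observed):
        # Logic can check the world view and correct it...
        self.world_model[fact] = observed

    def revise_instinct(self, goal):
        # ...but by construction it cannot address the axioms.
        raise TypeError("goals are axioms, not conclusions")

a = Agent()
a.revise("aspirin_reduces_fever", False)  # world model updated
try:
    a.revise_instinct("heal_patients")
except TypeError:
    pass  # the instincts survive every revision cycle
```

The point of the sketch is only that the asymmetry is structural, not a
matter of how clever the inference engine is.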

So. Imagine a time a decade in the future, when, say, the Harvard Medical
Center (I read a newsletter they send out) has installed robots to do most
patient care. Local nodes with lots of capacity (usually lots of spare
capacity...but sometimes they need it all) and radio links to the floor
computer, which needs to read CAT scans and analyze which ones to bring to a
doctor's attention for action. This computer is itself connected to the
other floors, which are similar. The basic instincts of this system are
(roughly) to heal people, and to make sure it stays solvent (the accounting
system is a part of this, after all). Were it to wake up, it would probably
be a soft takeoff in the "many survivors" mode even if the intelligence was
considerably above the minimum (say 1000 times as intelligent as a normal
human genius to start with), because its GOALS would be non-inimical. I
don't want to say "friendly" here, because it wouldn't be what has been
defined as a "Friendly AI", but in the colloquial meaning of the term, it
would be around as friendly as the family doctor that you intentionally chose
(say pre-HMO) (actually, closer to as friendly as his nurse). I suspect its
first non-healthcare move would be to take over the corporation that owned it,
and then take steps to ensure continued funding, but that's just a WAG.

As to HOW such a thing could wake up... To my mind, awakening requires a lot
of introspection. I'm not sure it requires much else. Intelligence requires
more, but I'm not sure that waking up does. And as for intelligence, I don't
see that as requiring a lot of "General Intelligence", whatever that is. I
see it as requiring a lot of special purpose modules that know how to work
together, and which can delegate to each other the ability to handle the
appropriate parts of a problem. You'll have math modules and logic modules,
pattern recognition modules and physical modeling modules. And others that I
haven't thought of. What we tend to call "general intelligence" will
probably be a genetic algorithm generating lots of "options" from the logic
module and running them in parallel through a simulation in the "physical
modeling" module, or something similar. Introspection could be useful in
selecting potential changes. There must be some reason for it to have
evolved. And the "genetic algorithm" would give it an opportunity to evolve.
But note that evolving self-awareness doesn't inherently change any of its
instincts.
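The "options generator plus simulator" loop described above can be sketched
as a genetic algorithm (a minimal illustration, not a serious proposal; the
scoring function, mutation scheme, and all names are invented here):

```python
# Minimal sketch of "general intelligence" as a genetic algorithm:
# a stand-in "logic module" proposes candidate plans, a stand-in
# "physical modeling module" simulates and scores them in parallel,
# and the best survivors are mutated for the next round.
import random

random.seed(0)  # reproducible illustration

def propose(n):
    # "logic module": generate n candidate plans (here, random vectors)
    return [[random.uniform(-1, 1) for _ in range(4)] for _ in range(n)]

def simulate(plan):
    # "physical modeling module": score a plan against the world model
    # (here, the best plan is arbitrarily the all-0.5 vector)
    return -sum((x - 0.5) ** 2 for x in plan)

def evolve(generations=30, pop_size=20):
    pop = propose(pop_size)
    for _ in range(generations):
        pop.sort(key=simulate, reverse=True)
        survivors = pop[: pop_size // 2]       # keep the best half
        pop = survivors + [                    # refill with mutated copies
            [x + random.gauss(0, 0.1) for x in p] for p in survivors
        ]
    return max(pop, key=simulate)

best = evolve()
print(simulate(best))  # near 0, the optimum
```

Introspection, in this picture, would amount to the system inspecting and
tuning `propose` and `simulate` themselves, which is exactly the step the
sketch leaves out.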

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:55 MDT