RE: Scenario for early hard takeoff

From: Doug Sharp (
Date: Tue Dec 11 2007 - 18:54:21 MST

Sorry, wrong reply.

-----Original Message-----
From: [] On Behalf Of Doug Sharp
Sent: Tuesday, December 11, 2007 7:14 PM
Subject: RE: Scenario for early hard takeoff

Copy and paste the URL into the address bar of your web browser:

See if that works.

-----Original Message-----
From: [] On Behalf Of Matt Mahoney
Sent: Friday, August 31, 2007 12:03 PM
To: sl4
Subject: Scenario for early hard takeoff

I would like to hear your opinions on the threat of early, hard takeoff
following an evolutionary approach to AGI. As you know, many groups are
working independently toward this goal, and not everyone is designing for
friendliness. In fact, many don't even consider it to be a problem. They
are just trying to get their systems to work.

A singularity is launched when computers have better than human level
intelligence, because if humans can create such machines, then those
machines can do likewise, and faster. But how do you know when a machine is
smarter than you? My computer has an IQ of 10^12 in arithmetic and 0.001 in
art.

I argue that the relevant measure of intelligence is the ability to
recursively improve itself. We know that an agent cannot predict the output
of another agent of greater algorithmic complexity. Therefore recursive
self-improvement (RSI) necessarily requires an experimental approach. The parent
does not know which mutations will result in a more successful child.

I think you can see that AGI will likely take an evolutionary approach.
Evolution favors intelligences that are the most successful at reproduction.

There are two ways to accomplish this.

1. Good heuristics, e.g. the ability to guess which modifications are likely
to succeed.

2. Access to resources, e.g. CPU cycles, memory, network bandwidth, which
limit the rate at which experiments can be performed (i.e. the population
size). Brute force makes up for bad heuristics.
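The tradeoff between the two can be sketched with a toy evolutionary loop. This is a hypothetical illustration: the bit-counting fitness function, the mutation operators, and the experiment budget are my own stand-ins, not anything from an actual AGI design.

```python
import random

# Toy fitness landscape: maximize the number of 1-bits in a 64-bit genome.
# A stand-in for "a more successful child"; real fitness is far harder to test.
GENOME_BITS = 64

def fitness(genome):
    return bin(genome).count("1")

def mutate_blind(genome):
    # Brute force: flip one random bit with no idea whether it helps.
    return genome ^ (1 << random.randrange(GENOME_BITS))

def mutate_heuristic(genome):
    # Good heuristic: guess which modifications are likely to succeed
    # (here, only flip 0-bits to 1).
    zeros = [i for i in range(GENOME_BITS) if not genome >> i & 1]
    if not zeros:
        return genome
    return genome | 1 << random.choice(zeros)

def evolve(mutate, budget):
    # The experimental approach: keep a child only if the test shows
    # it is at least as fit as the parent.
    genome = 0
    for _ in range(budget):
        child = mutate(genome)
        if fitness(child) >= fitness(genome):
            genome = child
    return fitness(genome)

random.seed(0)
# Same experiment budget for both; the blind mutator needs many more
# CPU cycles (a larger budget) to make up for its bad heuristic.
print(evolve(mutate_heuristic, 100), evolve(mutate_blind, 100))
```

With an equal budget the heuristic mutator saturates the landscape while the blind one stalls, which is the sense in which brute force only "makes up for" bad heuristics when resources are abundant.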

Currently, humans have better heuristics than machines. Random bit flips to
a program are very unlikely to yield improvements with respect to any goal.
Software sits on the chaotic side of Kauffman's boundary between ordered and
chaotic systems: any small change in the code results in a large change in
behavior. Software engineers have many techniques for reducing the
interdependency of programs and bringing them closer to the stable side of
that boundary: local variables, functions, classes, packages, libraries, and
standard protocols. In addition, software development, debugging, testing, and
engineering require a lot of human level knowledge: understanding how users
and programmers think, familiarity with similar software, and the ability to
read vague and incomplete specifications in natural language and fill in the
blanks with reasonable assumptions. A hacker reverse engineering a network
client can spot an icon image embedded in an executable and guess what
happens when a user clicks on it. A machine does not have this advantage.
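The claim that random changes are very unlikely to help can be checked directly. The sketch below mutates the source text of a tiny hypothetical function one character at a time and counts how many mutants even compile; the function, seed, and trial count are my own choices for illustration.

```python
import random

# A small, correct function; we treat its source text as the "genome".
SRC = "def add(a, b):\n    return a + b\n"

def mutate(src):
    # Replace one random character with a random printable character,
    # the software analogue of a random bit flip.
    i = random.randrange(len(src))
    return src[:i] + chr(random.randrange(32, 127)) + src[i + 1:]

random.seed(1)
TRIALS = 500
survivors = 0
for _ in range(TRIALS):
    try:
        # A mutant that compiles is not necessarily better, merely
        # not instantly dead; most do not get even this far.
        compile(mutate(SRC), "<mutant>", "exec")
        survivors += 1
    except SyntaxError:
        pass

print(f"{survivors}/{TRIALS} mutants still compile")
```

Most mutants fail to parse at all, and almost none of the survivors compute anything useful, which is the chaotic sensitivity the paragraph above describes.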

The second problem is to acquire resources. This could happen in three ways.

1. They could be bought, e.g. Google, Blue Gene/L.
2. They could be begged, e.g. GIMPS, SETI@Home.
3. They could be stolen, e.g. the 1988 Morris worm, Code Red, SQL Slammer.

Worms are primitive organisms. They reproduce rapidly, taking over a large
portion of the Internet in minutes or hours, but they can't usefully mutate.

Once the environment adapts by closing the security holes they exploited,
they become extinct.

A deeper reason is that the worms ultimately failed because they could only
acquire a small portion of the available resources. The Internet is not
just a network of a billion computers. It is a network of a billion
computers and a billion human brains. But what will happen when most of the
computing
resources shift to silicon?

An intelligent worm that understands software is my nightmare. Every day
Microsoft issues security patches. So does just about every major software
developer. I know that there are thousands of vulnerabilities on my
computer right now. Usually it is not a problem: if nobody knows about
them, then nobody can exploit them, and once they are exploited, they are
found and the software is patched. The window of vulnerability is small.

But what happens when an AGI can analyze and reverse engineer software, then
launch an attack that exploits thousands of vulnerabilities at once?

Here are just a few examples.

Every few days when I turn off my computer, Windows installs an automatic
update. What happens if:
1. The update server is hacked, and my PC downloads a trojan.
2. The DNS server of my ISP is hacked and returns a bogus IP address,
directing me to a lookalike site.
3. A router between my PC and the server is hacked and inserts packets
containing trojan code.
4. My neighbor's PC is hacked, listens to traffic on the cable modem (which
is a broadcast medium), and injects packets at just the right time.
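All four attacks depend on the client trusting whatever bytes arrive. A sketch of the standard countermeasure, with hypothetical payload and digest (in practice the expected digest would be a signature under a vendor key shipped with the OS, so the attacker cannot swap it out over the same hijacked path):

```python
import hashlib

# Hypothetical genuine update and its published SHA-256 digest.
GENUINE = b"update-1.2: genuine code"
EXPECTED = hashlib.sha256(GENUINE).hexdigest()

def accept_update(payload: bytes) -> bool:
    # A hacked router, DNS server, or lookalike site can change the
    # bytes in transit, but not without changing the digest.
    return hashlib.sha256(payload).hexdigest() == EXPECTED

print(accept_update(GENUINE))                 # True
print(accept_update(GENUINE + b" + trojan"))  # False
```

Of course, this only moves the trust to the signing key and the build machine, which is exactly the kind of window an intelligent attacker would go after next.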

Linux is not immune. When I boot up Ubuntu I often get a notification that
updates are available, and I happily type in my root password so it can
install a new kernel. All sorts of programs now have automatic updates.

I must stress that fixing these vulnerabilities does not solve the problem,
because there are thousands more, and an intelligent machine is going to
find them first. It will also be far more clever. Undoubtedly it has hacked
Yahoo or a router somewhere and read my email, so it knows that I test data
compression programs on my PC. I get an email, with a return address of
someone I know who works in data compression development, announcing that a
new version is available. My virus detector has never seen the program
before.

Just as the most successful parasites do not kill their hosts, successful
worms will remain hidden. Your computer will seem to work normally. But
really, what could you do? You google for a test and download a patch, with
the worm watching your every move? Wipe your disk and reinstall the OS from
CD, and then connect to an Internet where every computer is infected? Or do
you just accept that you can't trust your computer, and just live with it?

-- Matt Mahoney,

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:01 MDT