RE: Scenario for early hard takeoff

From: Doug Sharp (dougsharp@channelzilch.com)
Date: Tue Dec 11 2007 - 18:13:42 MST


Copy and paste the URL into the address bar of your web browser:
http://chipwits.com/elves/elvesbuild.zip

See if that works.

-----Original Message-----
From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Matt Mahoney
Sent: Friday, August 31, 2007 12:03 PM
To: sl4
Subject: Scenario for early hard takeoff

I would like to hear your opinions on the threat of an early, hard
takeoff following an evolutionary approach to AGI. As you know, there
are many groups working independently toward this goal, and not everyone
is designing for friendliness. In fact, many don't even consider it to
be a problem. They are just trying to get their systems to work.

A singularity is launched when computers have better than human level
intelligence, because if humans can create such machines, then those
machines can do likewise, and faster. But how do you know when a machine
is smarter than you? My computer has an IQ of 10^12 in arithmetic and
0.001 in art appreciation.

I argue that the relevant measure of intelligence is the ability to
recursively self-improve. We know that an agent cannot predict the
output of another agent of greater algorithmic complexity. Therefore
recursive self-improvement (RSI) necessarily requires an experimental
approach: the parent does not know which mutations will result in a more
successful child until it builds and tests them.
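
To make the prediction claim concrete, here is a toy diagonalization
sketch in Python (my own illustration; the agent names and the
prediction rule are arbitrary). Any agent complex enough to contain its
would-be predictor can run the predictor on itself and do the opposite,
so the simpler predictor is always wrong:

    def predictor(agent):
        # Some fixed prediction rule; the details do not matter.
        return agent.__name__.startswith("a")

    def adversarial_agent():
        # This agent embeds the predictor, so its algorithmic complexity
        # exceeds the predictor's, and it can invert the prediction.
        return not predictor(adversarial_agent)

    print(predictor(adversarial_agent))  # predicted behavior: True
    print(adversarial_agent())           # actual behavior: False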

I think you can see that AGI will likely take an evolutionary approach.
Evolution favors intelligences that are the most successful at reproduction.

There are two ways for an intelligence to accomplish this:

1. Good heuristics, e.g. the ability to guess which modifications are
likely to be beneficial.

2. Access to resources, e.g. CPU cycles, memory, and network bandwidth,
which limit the rate at which experiments can be performed (i.e. the
population size). Brute force makes up for bad heuristics, as the sketch
after this list illustrates.
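
Here is a minimal sketch of that tradeoff (my own toy model; the OneMax
fitness function and the numbers are arbitrary stand-ins). The mutation
operator is blind, so every child is an experiment, and a larger
population, i.e. more experiments per generation, converges in far
fewer generations:

    import random

    GENOME_LEN = 64

    def fitness(genome):
        # OneMax: count the 1-bits; a stand-in for reproductive success.
        return sum(genome)

    def mutate(genome):
        # Blind mutation: flip one random bit. The parent cannot tell in
        # advance whether the child will be better; it has to test it.
        child = genome[:]
        child[random.randrange(len(child))] ^= 1
        return child

    def generations_to_optimum(pop_size):
        parent = [0] * GENOME_LEN
        generations = 0
        while fitness(parent) < GENOME_LEN:
            # pop_size experiments per generation; keep the best child.
            best = max((mutate(parent) for _ in range(pop_size)),
                       key=fitness)
            if fitness(best) > fitness(parent):
                parent = best
            generations += 1
        return generations

    for pop_size in (1, 10, 100):
        print(pop_size, generations_to_optimum(pop_size))

With a population of 1 the blind search needs a few hundred generations;
with 100 it needs well under a hundred, even though the heuristic never
improved.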

Currently, humans have better heuristics than machines. Random bit flips
to a program are very unlikely to yield improvements with respect to any
meaningful goal. Software is on the chaotic side of Kauffman's threshold
for critically balanced systems. Any small change in the code results in
a large change in behavior. Software engineers have many techniques for
reducing the interdependency of programs to bring them closer to a
Lyapunov exponent of 0: local variables, functions, classes, packages,
libraries, and standard protocols. In addition, software development,
debugging, testing, and reverse engineering require a lot of human-level
knowledge: understanding how users and programmers think, familiarity
with similar software, and the ability to read vague and incomplete
specifications in natural language and fill in the blanks with
reasonable assumptions. A hacker reverse engineering a network client
can spot an icon image embedded in an executable and guess what happens
when a user clicks on it. A machine does not have this advantage.
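
The bit-flip claim is easy to check. Below is a small Python experiment
of my own (the toy function and the trial count are arbitrary choices)
that flips one random bit in the source text of a working program and
counts how many mutants still behave correctly:

    import random

    SOURCE = "def f(x):\n    return 3 * x + 1\n"

    def mutate(src):
        # Flip one random low-order bit in one random character.
        i = random.randrange(len(src))
        c = chr(ord(src[i]) ^ (1 << random.randrange(7)))
        return src[:i] + c + src[i + 1:]

    survivors = 0
    TRIALS = 10000
    for _ in range(TRIALS):
        namespace = {}
        try:
            exec(compile(mutate(SOURCE), "<mutant>", "exec"), namespace)
            if namespace["f"](5) == 16:  # same behavior as before?
                survivors += 1
        except Exception:
            pass  # SyntaxError, NameError, ValueError, ...
    print(survivors, "of", TRIALS, "mutants still compute f(5) == 16")

In a typical run essentially no mutants survive: nearly every flip
produces a syntax error, a crash, or a wrong answer, which is exactly
the chaotic sensitivity described above.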

The second problem is to acquire resources. This could happen in three
ways:

1. They could be bought, e.g. Google, Blue Gene/L.
2. They could be begged, e.g. GIMPS, SETI@Home.
3. They could be stolen, e.g. the 1988 Morris worm, Code Red, SQL Slammer.

Worms are primitive organisms. They reproduce rapidly, taking over a
large portion of the Internet in minutes or hours, but they can't
usefully mutate. Once the environment adapts by closing the security
holes they exploited, they become extinct.

A deeper reason the worms ultimately failed is that they could only
acquire a small portion of the available resources. The Internet is not
just a network of a billion computers. It is a network of a billion
computers and a billion human brains. But what will happen when most of
the computing resources shift to silicon?

An intelligent worm that understands software is my nightmare. Every day
Microsoft issues security patches. So does just about every major
software developer. I know that there are thousands of vulnerabilities
on my computer right now. Usually it is not a problem, because if nobody
knows about them, then nobody can exploit them, and once they are
exploited, they are discovered and the software is patched. The window
of vulnerability is small.

But what happens when an AGI can analyze and reverse engineer software, then
launch an attack that exploits thousands of vulnerabilities at once?

Here are just a few examples.

Every few days when I turn off my computer, Windows installs an
automatic update. What happens if:
1. windowsupdate.microsoft.com is hacked, and my PC downloads a trojan.
2. The DNS server of my ISP is hacked, and returns a bogus IP address
for windowsupdate.microsoft.com, directing me to a lookalike site.
3. A router between my PC and the server is hacked, and inserts packets
containing trojan code.
4. My neighbor's PC is hacked, listens to traffic on the cable modem
(which is a broadcast medium), and injects packets at just the right
time.

Linux is not immune. When I boot up Ubuntu I often get a notification
that updates are available, and I happily type in my root password so it
can install a new kernel. All sorts of programs now have automatic
updates.

I must stress that fixing these vulnerabilities does not solve the
problem, because there are thousands more, and an intelligent machine is
going to find them first. It will also be far more clever. Undoubtedly
it has hacked into Yahoo or a router somewhere and read my email, so it
knows that I test data compression programs on my PC. I get an email,
with the return address of someone I know who works in data compression
development, saying that a new version is available. My virus detector
has never seen the program before.

Just as the most successful parasites do not kill their hosts,
successful worms will remain hidden. Your computer will seem to work
normally. But really, what could you do? Google for a test and download
a patch, with the worm watching your every move? Wipe your disk and
reinstall the OS from CD, and then connect to an Internet where every
computer is infected? Or do you just accept that you can't trust your
computer, and live with it?

-- Matt Mahoney, matmahoney@yahoo.com


