Re: Complexity tells us to maybe not fear UFAI

From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Wed Aug 24 2005 - 16:33:27 MDT


> By limiting the computational power available to an AI to be
> one or two orders of magnitude less than that available to a
> human, we can guarantee that it won't outthink us - or, if it
> does, it will do so very, very slowly.

AGIs have /many/ potential advantages over the brain, including
much less pressure to parallelise (with its corresponding
inefficiencies in problems that aren't embarrassingly parallel),
reliability (removing the need for redundancy to achieve
accuracy) and much better reconfigurability. Plus, to be fair,
you'd have to count transistors, or the number of transistors
you'd need to simulate a neuron, not von Neumann ops/second (to
allow for the fact that the brain's hardware is mostly special
purpose).

If you'd said 'nine or ten orders of magnitude', counting raw
FLOPs, this would be more reasonable but not terribly useful.
No one here is deliberately proposing to keep an AGI infrahuman,
and we've already been over how you can't prove that an AGI will
be safe when scaled to superhuman intelligence by performing
black-box experiments on an infrahuman precursor. White-box
experiments may help if the design is non-opaque and the researcher
generally knows what they're doing. The problem here, aside from
the general difficulty of designing experiments that will
actually prove the scalable Friendliness of the design to a high
degree of confidence, is that most AGI designs tend to need a lot
more compute power at the start to get things rolling than they
do once reasonably efficient learned behaviours are in place. It's
entirely possible to write a throttleable AI driven by automatic
and/or manual assessment of the rate of progress, but that's
getting into takeoff prevention and layered safety architectures
beyond the (initially) charming simplicity of your original
proposal.
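
To make that concrete, here is a minimal sketch of the kind of
throttle loop I mean; the names, the op budget, the alarm
threshold and the 'progress metric' are all placeholder
assumptions of mine, not a description of any real AGI design:

    # Hypothetical compute throttle (Python); 'agent' and
    # 'measure_progress' stand in for whatever the real system provides.
    def run_throttled(agent, measure_progress,
                      op_budget=1e12,    # ops per cycle, arbitrary ceiling
                      alarm_rate=0.05):  # progress per cycle that trips review
        last_score = measure_progress(agent)
        while True:
            agent.step(op_limit=op_budget)   # cap the ops the agent may use
            score = measure_progress(agent)
            rate = score - last_score
            last_score = score
            if rate > alarm_rate:
                # Faster progress than expected: halve the budget and
                # stop for manual sign-off before continuing.
                op_budget *= 0.5
                input("rate %.3f tripped the alarm; Enter to resume" % rate)

The point is just that the budget and the alarm sit outside the
agent's own control loop, which is where the 'layered' part of a
layered safety architecture comes in.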

In any case, this isn't a reason not to fear UFAI in general:
regardless of whether you think limiting compute is a good idea,
there are plenty of people out there who will throw as much
computing power as they can get their hands on at their
best-guess AGI architecture.

> but I don't think any algorithm will be found for general
> intelligence that doesn't have the property that exponential
> increases in resources are needed for a linear increase
> of some IQ-like measure.

You could mean either of two things: that it takes exponentially
more resources to /run/ a more intelligent AI, or that it takes
exponentially more resources to /design/ one.

The first argument certainly doesn't hold for biological
intelligence. It doesn't take an exponentially greater amount
of brain tissue for a human to achieve an IQ score vastly in
excess of a chimp's, nor does a chimp have a brain exponentially
larger than that of a wolf (very roughly, a wolf's brain is on
the order of 0.1 kg, a chimp's 0.4 kg and a human's 1.4 kg, so
the whole range spans barely one order of magnitude of tissue).
I would say that the scaling is better than linear, though my
point here is just that it's a lot better than exponential.
Does it take exponentially
more mass, or more DNA, or more cells to make a higher fitness
organism in general? Does building a supersonic fighter jet
instead of a subsonic one take exponentially more anything?

The answer is no, because until you reach physical limits
performance is a question of organisation rather than raw
resources. The question of how the difficulty of design scales
with intelligence isn't so clear cut, and indeed I asked it
myself in my first post to this list. There isn't a simple
answer to this one; I think the factor is a lot less than
exponential, but I don't have a concise argument I can use
to convince you. However, I would note that once again
biological evolution sets an opposing precedent; if you plot
the intelligence of the brightest creature on the planet
over time you will have a graph that shows an exponential
/increase/, shooting up dramatically in the last million
years, despite steadily increasing generation times as brains
get bigger. Indeed, this graph looks suspiciously like the one
predicted by Singularity theory, extended backwards in time
instead of forwards.

> If the AI gets out and is able to harness the computational
> power on the internet, that would be different. But within
> its box, it's going to remain at or less than the order of
> magnitude of intelligence dictated by its computational
> capacity.

Better make sure it's an air gap, as human software security
will look pathetic to an AI that understands the causal
structure of computer programs as easily as we understand
navigation in three-dimensional space.

 * Michael Wilson
