Re: AI hardware was 'Singularity Realism'

From: J. Andrew Rogers (andrew@ceruleansystems.com)
Date: Sun Mar 07 2004 - 20:28:42 MST


On Mar 7, 2004, at 5:07 PM, Keith Henson wrote:
> At 10:11 AM 07/03/04 -0800, andrew wrote:
>
>> You cannot substitute time complexity for space complexity, and space
>> complexity generally defines intelligence. You can substitute space
>> complexity for time complexity, but not the other way around.
>
> I can't parse this. It seems likely that you have something here that
> is worth understanding. Can you try again from a bit lower level?

Sure.

Time complexity is just how the running time of a given algorithm
scales with the size of its input. I'm sure you already got that.

A finite machine can be characterized by the size of the largest
algorithm that can be expressed on that machine: its intrinsic
Kolmogorov complexity. This roughly maps to "memory" for standard
silicon, hence "space complexity", i.e. the amount of memory
mathematically necessary to run a particular algorithm.

No algorithm with a Kolmogorov complexity greater than that of a given
machine can be expressed on said machine. To put it another way, a
machine with infinite execution speed is still incapable of computing
algorithms that do not fit within the space complexity of the machine.
On the other hand, you can improve time complexity by using the excess
space capacity of a machine to implement faster but less
space-efficient algorithms, e.g. using a giant lookup table (O(1))
rather than computing values mathematically, which could be O(n) or
worse. This is also why quantum computers aren't as important to AI as
some people seem to think they are. They don't let us run an entirely
new class of algorithms; they only let existing algorithms run faster.
"Faster" doesn't mean "smarter" in the abstract, though in practice
speed is admittedly important as well if intelligence is to be useful.
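
To make the substitution concrete, here's a toy Python sketch (the
popcount example and all names are mine, purely for illustration).
Spending a small, fixed amount of space up front buys O(1) answers
forever after; the reverse trade doesn't exist, because no amount of
extra time conjures up table entries that won't fit in memory.

def popcount_loop(x: int) -> int:
    """Compute the answer every time: O(n) in the bit width,
    no extra space."""
    count = 0
    while x:
        count += x & 1
        x >>= 1
    return count

# Spend 256 table entries of space once; every later query is O(1).
POPCOUNT_TABLE = [popcount_loop(i) for i in range(256)]

def popcount_table(x: int) -> int:
    return POPCOUNT_TABLE[x & 0xFF]

assert all(popcount_loop(i) == popcount_table(i) for i in range(256))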

High time complexity makes getting an answer from a machine
intractable, but if you wait long enough, you'll get an answer. High
space complexity makes getting an answer from a machine impossible,
because the pattern is incapable of being expressed or perceived. Big
qualitative difference, that. The limit of intelligence expressible
on any machine is generally defined by the Kolmogorov complexity of
the machine, since this also defines the limit of both expression and
inference. A machine will never be able to perceive a high-order
pattern that is more complex than the machine itself.

The technical details are a bit more complicated than this, but the
point about time complexity not being a substitute for space complexity
should be clearer now. Faster machines make some operations faster,
nothing more. Bigger machines (in the memory sense), however, usually
make smarter machines no matter what the speed.

>> I think the brain storage mechanism is far more efficient in an
>> information-theoretic sense than a lot of people think it is.
>
> I hope you are right, because I don't like the conclusions of Thomas
> Landauer's research. But I really don't see how to refute him.

The number Landauer comes up with for the bits is probably pretty
close to correct, extrapolating from my own models. There are a
couple of key points, though:

1.) The kind of representation efficiency we get in vanilla silicon
is atrocious because of architecture requirements. Storing a few bits
of information in any kind of good associative representational
framework probably averages a hundred bytes of memory spent on
alignment, non-locality, addressing, etc. We don't get much bang for
our representation buck. However, even with these inefficiencies we
will easily surpass the real capacity of wetware in silicon very soon.

2.) The brain almost certainly gets a much higher compression ratio
than we are used to with normal computers, which skews our perception
of what is possible in a given amount of space. If you loosen the
constraints of accuracy a bit, you can actually cram a hell of a lot
of information into that space while maintaining high predictive
limits (i.e. maintaining good correctness of memory); see the sketch
after this list. Humans don't actually seem to remember that much,
and none of what we remember is axiomatic or absolute.
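
As a toy illustration of point 2, here is a Bloom filter sketch in
Python. The Bloom filter is my own stand-in for "lossy but highly
predictive" storage, not a claim about the brain's mechanism, and all
names and parameters below are illustrative.

import hashlib

class BloomFilter:
    """Lossy set membership: never a false negative, and false
    positives at a small, tunable rate."""

    def __init__(self, num_bits: int, num_hashes: int):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8 + 1)

    def _positions(self, item: str):
        # Derive num_hashes independent bit positions per item.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def probably_contains(self, item: str) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# 1,000 items in 10,000 bits is ~10 bits per item for roughly a 1%
# false positive rate, versus the ~100 bytes per entry that an exact
# associative structure can cost (point 1 above).
bf = BloomFilter(num_bits=10_000, num_hashes=7)
for n in range(1000):
    bf.add(f"item-{n}")
assert bf.probably_contains("item-42")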

> I haven't got the slightest idea of how you map a mathematical
> structure into a physical one or compare them. Perhaps you could
> expand on this and give an example?

The data structures look and behave in a fashion similar to biological
neural networks (as far as we understand them), yet are
mathematically/algorithmically derived. It could be coincidence, but I
have my doubts. I'm not a neural network aficionado (biological or
otherwise).

> That's an awesome claim. It is of considerable interest to me in a
> business sense because of the badge camera.

It is an expensive and oddly behaved algorithm for embedded systems,
and would be better suited to feature extraction than to vanilla
compression. The compression is an (expected) side effect that I
thought worth studying, not the end in itself.

> I really don't understand this. Any n-dimensional data set can be
> turned into a linear string.

Sure, and this is what I do. But in real-world applications where
performance matters, data compression is a more complicated
implementation that frequently mandates sequential structure, because
the storage itself is sequential. You have many kinds of limitations
that make the theoretical impractical for many uses. Constrained to
typical application parameters, vanilla compression would be more
efficient.
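
For what it's worth, the flattening Keith describes is the standard
row-major mapping. A minimal Python sketch (2-D for brevity, my own
toy example); note how neighbors along the first axis land shape[1]
positions apart in the string, which is exactly the locality problem
that makes the theoretical version impractical when performance
matters:

def flatten_index(coords, shape):
    """Map an n-dimensional coordinate to its position in the
    linear string (row-major order)."""
    index = 0
    for coord, dim in zip(coords, shape):
        index = index * dim + coord
    return index

grid = [[1, 2, 3], [4, 5, 6]]            # shape (2, 3)
flat = [x for row in grid for x in row]  # the linear string
assert flat[flatten_index((1, 2), (2, 3))] == grid[1][2]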

The compression is a necessary theoretical consequence of my main
thrust. I've long used it as a kind of zero-knowledge proof of
implementation efficiency and correctness. I'm not in the data
compression business.

> If you are talking about remembering a phone number, it is clear that
> lossy is not useful.

Unfortunately, the brain is lossy, even with phone numbers.
Fortunately, we can easily selectively reinforce the memory of things
that we need to remember with high predictability.

j. andrew rogers


