From: James Rogers (jamesr@best.com)
Date: Thu Oct 09 2003 - 18:41:48 MDT
Hi folks,
I had an insight several weeks ago that may or may not be novel, but a
cursory Google search doesn't turn up any obvious mention of it. I
thought some of the people here with more in-depth knowledge of neural
networks might be able to comment on it.
One of the things that always interested me is the apparent fact that
the brain is capable of executing complex algorithms without the benefit
of something that we take for granted on computers: dynamic memory
allocation for storing transient data. Nor are there large permanent
buffer structures in evidence that can be used for this purpose.
Some background: As most people on this list know, I work on a novel
class of mathematically-derived universal computer (roughly based on
algorithmic information theory) that shows a striking convergence with
what we know about biological neural networks, both structurally and
behaviorally. Something I don't believe I've ever mentioned is that it
doesn't require dynamic memory to do computation, only to add links
and the occasional node when encoding new patterns (i.e., permanent
learning).
One thing that has bothered me about the design is that I use a small
number of fixed FIFO-type buffers (arrays in practice) at certain
"edges" inside special nodes ("special" precisely because of the small
buffer). These buffers mostly exist for synchronization and to make
data streams play nicely with the network structures, but they are
critically important functionally. From an algorithmic elegance
standpoint the buffers are cheating, and everything about the brain
suggests that no literal permanent buffer structures like this are
required for what it does.
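For concreteness, here is roughly what one of those buffered edges
looks like. This is an illustrative sketch in Python, not code from
the actual design, and all of the names are mine:

    from collections import deque

    class BufferedEdge:
        """An edge carrying a small fixed-depth FIFO. A value pushed
        in one end falls out the other end `depth` steps later, which
        is all the synchronization the surrounding nodes need."""

        def __init__(self, depth):
            # Fixed-size FIFO: an array in practice, a deque here
            # for brevity, pre-filled so it acts as a pure delay.
            self.fifo = deque([0.0] * depth, maxlen=depth)

        def push(self, value):
            out = self.fifo[0]       # oldest in-flight value exits
            self.fifo.append(value)  # deque drops the oldest slot
            return out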
I spent some time thinking about this problem in general. I can't
imagine a functional universal computer without something equivalent
to a transient or "in-flight" data storage area for various purposes,
yet I still needed a small number of memory buffers in one place,
which the brain suggests isn't strictly required.
It occurred to me that a similar problem was solved at the fuzzy
boundary of analog-digital electronics many years ago, when digital
was very expensive. I don't know if these are used much anymore, but
for what we would use digital memory buffers today, they used chains
of analog voltage "buckets" that would dump their voltage value down a
wire (often to another voltage bucket), with the dump timing and
behavior controlled by various external control voltage lines. These
analog voltage "bucket brigade" designs were used to synchronize
analog signals by inducing controllable temporal delays, and were in
effect a mechanism for quasi-buffering an analog signal without using
digital memory as we normally think of it. Simple and primitive, but
quite effective for applications that needed flexible FIFO-type
buffering behavior without digital electronics. Using these
primitives, you can build interesting computational circuits that are
neither analog nor digital in a strict sense.
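To make the bucket-brigade idea concrete, here is a toy simulation
(my own illustration, not a circuit-accurate model). Each "bucket"
holds one voltage; on every tick of the external clock line, each
bucket dumps its charge to its neighbor down the chain. The chain as
a whole behaves like a FIFO delay line with no digital memory
anywhere:

    def tick(buckets, v_in):
        """One clock tick: every bucket passes its charge to the next
        bucket down the chain. Returns the voltage that falls off the
        far end; `buckets` is mutated in place."""
        v_out = buckets[-1]
        for i in range(len(buckets) - 1, 0, -1):
            buckets[i] = buckets[i - 1]   # charge moves one bucket down
        buckets[0] = v_in                 # new input enters the chain
        return v_out

    # A 4-bucket brigade delays the input signal by 4 clock ticks:
    buckets = [0.0] * 4
    signal  = [1.0, 0.5, -0.3, 0.7, 0.0, 0.0, 0.0, 0.0]
    delayed = [tick(buckets, v) for v in signal]
    # delayed == [0.0, 0.0, 0.0, 0.0, 1.0, 0.5, -0.3, 0.7]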
With this in mind, it turns out that I had essentially taken a complex
but otherwise ordinary buffer-less multi-node structure (roughly
organized in columns/chains), one that behaves in a fashion virtually
identical to the old analog bucket-brigade circuits, and folded it
into the current small data buffer for the sake of convenience.
Elegance problem solved.
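To make the folding concrete: a column of N single-valued nodes, each
simply handing its value to the next node per step, is observationally
identical to an N-deep FIFO array. The array was the folded-up
convenience; the node chain is the buffer-less structure it stood in
for. (Again, an illustrative sketch, not the actual design.)

    from collections import deque

    class ChainNode:
        """A node holding a single value, nothing more."""
        def __init__(self):
            self.value = 0.0

    def step_chain(chain, v_in):
        # Pass each node's value down the column, one step per tick.
        v_out = chain[-1].value
        for i in range(len(chain) - 1, 0, -1):
            chain[i].value = chain[i - 1].value
        chain[0].value = v_in
        return v_out

    chain = [ChainNode() for _ in range(4)]
    fifo = deque([0.0] * 4, maxlen=4)

    for v in [3.0, 1.0, 4.0, 1.0, 5.0, 9.0]:
        from_chain = step_chain(chain, v)
        from_fifo = fifo[0]           # oldest value in the array
        fifo.append(v)                # deque drops the oldest slot
        assert from_chain == from_fifo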
When I went back and looked up some neuroscience papers on column and
chain structures in the brain, I noticed a lot of structures that look
very similar to that whole family of analog buffer circuits, and in
the places where I would normally expect to find these kinds of
buffering structures computationally. But the references to analog
buffers I've found (and only a small number of vaguely related papers
show up on Google) do not seem to be referring to complex multi-neuron
structures in the way I am. Looking at some of the papers, it seems
they are looking at the same macro structures, but their thinking is
in the wrong place conceptually for that model to pop out.
I don't deal with neural models much, so I would be interested in the
comments of someone who does. The more I look into it, the harder it
is to see it any other way. From where I'm sitting, these columns of
neurons should be able to provide all the dynamic memory buffers
required for computation, and they show up in all the right places.
What's more, you can infer a lot about the function of other neural
structures from how these things are arranged and connected, the same
way you would for similar types of circuits, so the idea could be
useful for unraveling neural function at a more macro level.
The list has been slow for a while, so I thought I would throw an idea
grenade into the room, one that I'd actually like to hear comments on. As I
said, I'm far from the world's greatest expert on neural networks (and I
think the simulated ones are useless), so in some ways this concept is
only half-baked. But it did answer a lingering design/structure
question in my mind, so it can't be totally stupid.
Cheers,
-James Rogers
jamesr@best.com