From: J. Andrew Rogers (firstname.lastname@example.org)
Date: Sun Mar 07 2004 - 11:11:07 MST
On Mar 6, 2004, at 8:00 PM, Keith Henson wrote:
> We probably get away with relatively little information storage
> because we can take a tiny vague memory and fill it in with lots of
You cannot substitute time complexity for space complexity, and space
complexity generally defines intelligence. You can substitute space
complexity for time complexity, but not the other way around.
I think the brain's storage mechanism is far more efficient, in an
information-theoretic sense, than most people give it credit for.
I'm actually working on a short paper on this (for the website),
because I've done a lot of really interesting tangential work on data
compression related to the AGI work I do. In short, it is possible to
do lossless, random-access, high-compression information representation
in an algorithmic structure that looks very similar to neural networks.
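As a minimal illustration of what "lossless, random-access" compression means (this is just per-block zlib coding, a stand-in for the idea, not the neural-style structure described above):

```python
import zlib

BLOCK = 4096  # block size in bytes; an arbitrary choice for illustration


def compress_blocks(data: bytes) -> list[bytes]:
    """Compress each fixed-size block independently, so any block
    can be decoded without touching its neighbours (random access)."""
    return [zlib.compress(data[i:i + BLOCK])
            for i in range(0, len(data), BLOCK)]


def read_block(blocks: list[bytes], n: int) -> bytes:
    """Random access: decode only block n, nothing else."""
    return zlib.decompress(blocks[n])


data = b"the quick brown fox jumps over the lazy dog " * 500
blocks = compress_blocks(data)

# Losslessness: any single block round-trips exactly.
assert read_block(blocks, 2) == data[2 * BLOCK:3 * BLOCK]
```

The trade-off is that independent blocks forgo cross-block redundancy, which is exactly why purely sequential schemes compress better on sequential media.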
What is even more interesting, and what I actually am working on
writing up, is that if you modulate the encoding process on the
front-end with a simple "transition detection" signaling mechanism
(e.g. edge detection), I can get effective compression ratios that on
average exceed the best compression algorithms out there. Perhaps
ironically, it isn't terribly useful for many generic data compression
applications because most hardware storage media are purely
sequential-access beasts, and sequential or semi-sequential compression
algorithms are much better suited to them.
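A toy sketch of the "transition detection" front-end idea (delta coding of a smooth signal here stands in for edge detection; the actual encoding modulation is not specified above):

```python
import math
import zlib

# A smoothly varying "signal" standing in for sensory input.
signal = bytes(int(127 + 100 * math.sin(i / 50)) for i in range(8192))

# Hypothetical transition-detection front-end: store only the change
# between successive samples (mod 256), so stretches with few
# transitions become long runs of near-zero bytes.
deltas = bytes([signal[0]] + [(signal[i] - signal[i - 1]) % 256
                              for i in range(1, len(signal))])

# The transform is lossless: the original is recoverable by summation.
recovered = bytearray([deltas[0]])
for d in deltas[1:]:
    recovered.append((recovered[-1] + d) % 256)
assert bytes(recovered) == signal

# The transition stream uses far fewer distinct symbols, so a generic
# back-end compressor does much better on it than on the raw signal.
print(len(zlib.compress(signal)), len(zlib.compress(deltas)))
```

The point is only that a cheap change-detecting pre-pass can raise the effective compression ratio of whatever encoder follows it.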
But generally, I think that our noggins encode far more information
than a casual analysis of bits suggests. The "bits" should be bits in
more of a Kolmogorov complexity sense than the naive sequential storage
format sense, and there is a very large chasm between the two in
effective capacity. Filtering out low-value bits (i.e. making it
lossy) extends the useful storage range even further.
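The chasm between naive sequential bits and algorithmic content is easy to demonstrate; here compressed size serves as a rough, computable upper-bound proxy for Kolmogorov complexity:

```python
import zlib

# A megabyte of highly structured data "costs" 8 Mib in the naive
# sequential-storage sense, but its algorithmic content is tiny.
data = b"abcdefgh" * 131072  # 1 MiB of pure structure
naive_bits = len(data) * 8
compressed_bits = len(zlib.compress(data)) * 8  # upper bound on content

# The structured megabyte shrinks by orders of magnitude.
print(naive_bits, compressed_bits)
```

Compressed size only upper-bounds true Kolmogorov complexity (which is uncomputable), but the gap it exposes is the one argued for above.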
j. andrew rogers
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:46 MDT