From: Keith Henson (hkhenson@rogers.com)
Date: Sun Mar 07 2004 - 18:07:36 MST
At 10:11 AM 07/03/04 -0800, andrew wrote:
>On Mar 6, 2004, at 8:00 PM, Keith Henson wrote:
>>We probably get away with relatively little information storage because
>>we can take a tiny vague memory and fill it in with lots of processing.
>
>You cannot substitute time complexity for space complexity, and space
>complexity generally defines intelligence. You can substitute space
>complexity for time complexity, but not the other way around.
I can't parse this. It seems likely that you have something here that is
worth understanding. Can you try again from a bit lower level?
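[One common reading of the quoted claim is memoization: you can spend storage (space) to avoid recomputation (time), but extra compute cannot conjure capacity you don't have. A minimal sketch, with function names of my own invention:]

```python
from functools import lru_cache

def fib_slow(n):
    # O(1) storage (ignoring the call stack), exponential time:
    # every subproblem is recomputed from scratch.
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    # O(n) storage for the cache buys O(n) time:
    # space complexity substituted for time complexity.
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)
```

[The reverse substitution is the problem: no amount of recomputation lets the O(1)-space version remember more than it stores.]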
>I think the brain storage mechanism is far more efficient in an
>information-theoretic sense than a lot of people think it is.
I hope you are right, because I don't like the conclusions of Thomas
Landauer's research. But I really don't see how to refute him.
>I'm actually working on a short paper on this (for the website), because
>I've done a lot of really interesting tangential work on data compression
>related to the AGI work I do. In short, it is possible to do lossless,
>random-access, high-compression information representation in an
>algorithmic structure that looks very similar to neural structures.
I haven't got the slightest idea of how you map a mathematical structure
into a physical one or compare them. Perhaps you could expand on this and
give an example?
>What is even more interesting, and what I actually am working on writing
>up, is that if you modulate the encoding process on the front-end with a
>simple "transition detection" signaling mechanism (e.g. edge detection), I
>can get effective compression ratios that on average exceed the best
>compression algorithms out there.
That's an awesome claim. It is of considerable interest to me in a
business sense because of the badge camera.
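[For the archive: a generic "transition detection" front-end, not andrew's actual scheme, might look like delta-encoding a signal so only changes carry information, then run-length coding the mostly-zero deltas. A hedged sketch:]

```python
def transitions(signal):
    # Emit the change at each step; nonzero only at a transition (edge).
    prev, out = 0, []
    for x in signal:
        out.append(x - prev)
        prev = x
    return out

def rle(deltas):
    # Run-length encode the delta stream: [value, repeat_count] pairs.
    runs = []
    for d in deltas:
        if runs and runs[-1][0] == d:
            runs[-1][1] += 1
        else:
            runs.append([d, 1])
    return runs

def decode(runs):
    # Lossless reconstruction: re-accumulate the deltas.
    level, out = 0, []
    for d, count in runs:
        for _ in range(count):
            level += d
            out.append(level)
    return out
```

[On a piecewise-constant signal of 100 samples this collapses to a handful of runs while remaining perfectly invertible; real images and sounds are less obliging, which is where the interesting part of the claim lives.]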
>Perhaps ironically, it isn't terribly useful for many generic data
>compression applications, because most hardware storage media are purely
>sequential-access beasts, and sequential or semi-sequential compression
>algorithms are much better suited for such things.
I really don't understand this. Any n-dimensional data set can be turned
into a linear string.
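[In the trivial sense this is just row-major index arithmetic, e.g.:]

```python
def flatten(grid):
    # Serialize a 2-D array to a linear sequence, row-major order.
    return [v for row in grid for v in row]

def unflatten(flat, ncols):
    # Recover the 2-D structure from the linear sequence and one shape parameter.
    return [flat[i:i + ncols] for i in range(0, len(flat), ncols)]
```

[The linearization loses nothing; the question is whether a sequential compressor can still see the neighborhood structure once it is strung out.]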
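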
>But generally, I think that our noggins encode far more information than a
>casual analysis of bits suggests.
You should read the original paper; it was anything but casual, being a
meta-analysis of a bunch of previous studies.
>The "bits" should be bits in more of a Kolmogorov complexity sense than
>the naive sequential storage format sense, and there is a very large chasm
>between the two in effective capacity. Filtering out low-value bits (i.e.
>making it lossy) greatly extends the useful storage range even further.
If you are talking about remembering a phone number, it is clear that lossy
is not useful.
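[The chasm andrew describes between naive sequential bits and Kolmogorov-style bits can be seen with an off-the-shelf compressor, a crude but computable stand-in for Kolmogorov complexity (the data below is made up for illustration):]

```python
import os
import zlib

regular = b"ABCD" * 25_000     # 100,000 "naive" bytes, but a tiny description
random_ = os.urandom(100_000)  # 100,000 bytes with no short description

small = len(zlib.compress(regular))  # orders of magnitude below the raw size
big = len(zlib.compress(random_))    # stays close to the raw size
```

[Both inputs occupy the same number of naive bytes; only the regular one has a short program that regenerates it, which is the sense of "bits" at issue.]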
Keith Henson
>j. andrew rogers
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:46 MDT