**From:** Charles D Hixson (*charleshixsn@earthlink.net*)

**Date:** Mon Apr 24 2006 - 09:52:29 MDT

**Next message:** Charles D Hixson: "Re: Tech Hype"
**Previous message:** Richard Loosemore: "Re: The Singularity vs. the Wall"
**In reply to:** Philip Goetz: "Re: Fwd: We Can Understand Anything, But are Just a Bit Slow"
**Next in thread:** Eliezer S. Yudkowsky: "Re: Fwd: We Can Understand Anything, But are Just a Bit Slow"

On Sunday 23 April 2006 10:25, Philip Goetz wrote:

> My intuition, based on experience with how much computational power it
> takes to solve a problem of a particular size, and on Rescher's law of
> logarithmic returns, is that exponentially-increasing computational
> power is required to provide linear increase in "smartness", or some
> measure of the problems we can handle. For instance, finding primes
> of 2N bits takes much more than twice the computational power of
> finding primes of N bits.
>
> I also expect that the computational complexity of cognition is an
> exponential function of the size of working memory, so that if we
> currently have a working memory that can store 5 chunks, the amount of
> computation available in the universe limits us to some double-digit
> number of chunks.
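
(As an aside, the prime example is easy to check empirically. Here is a rough timing sketch in Python; sympy's randprime and the 512-bit starting size are my own choices for illustration, not anything from the quoted post.)

```python
# Rough timing sketch: finding a random prime of 2N bits costs far more
# than twice the cost of finding one of N bits (illustrative only).
import time
from sympy import randprime  # probabilistic search for a prime in a range

def time_prime(bits, trials=5):
    """Average wall-clock time to find a random prime with `bits` bits."""
    start = time.perf_counter()
    for _ in range(trials):
        randprime(1 << (bits - 1), 1 << bits)
    return (time.perf_counter() - start) / trials

n = 512  # N bits -- arbitrary choice for the demo
t1 = time_prime(n)
t2 = time_prime(2 * n)
print(f"{n}-bit prime:   {t1:.3f} s")
print(f"{2 * n}-bit prime:  {t2:.3f} s  (ratio ~{t2 / t1:.1f}x, well above 2x)")
```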

Quite plausibly. The question would be, however, "In what chunk complexity do problems exist?" (tying it back into the thread topic). If problems of complexity 5 are the highest that exist, then a limit of being able to process only 5 chunks simultaneously would not restrict the problems whose solutions you could understand (though it *might* slow down the comprehension). If, however, there were a problem that required simultaneously dealing with 6 or 7 chunks, then you couldn't understand it no matter how long you tried.

Now the proof of the four color theorem isn't, itself, proof of such complexity. It could be that it's the sheer length of the proof that defeats us. Do note, however, that symbolic logic is so structured that one only considers two chunks at any one time; i.e., formal logic methods are either of the form x = f(y) or of the form x = f(y, z). The required number of parameters is the inherent chunk complexity. If it has been shown that every function can be reduced to a form that is no more complex than those, then chunk complexity isn't a limitation. I believe it's been shown that Boolean functions can always be so decomposed (though I can't remember *why* I believe this), but not all functions are Boolean. (OTOH, computer programming is a good indication that Boolean functions can, at least, be a good model for all other functions.)
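
(For what it's worth, the Boolean case does hold: Shannon expansion rewrites any n-argument Boolean function as f(x, ...) = (x AND f(1, ...)) OR (NOT x AND f(0, ...)), so only one- and two-argument operations are ever needed. A minimal sketch in Python, my own illustration rather than anything from the thread:)

```python
# Shannon expansion: rewrite an n-ary Boolean function as a nest of
# binary AND/OR and unary NOT, so nothing wider than two arguments remains.
from itertools import product

def shannon(f, n):
    """Return a function equivalent to f (of arity n) built only from
    one- and two-argument operations on f's cofactors."""
    if n == 1:
        return f
    # Cofactors: fix the first argument to True or False, recurse on the rest.
    hi = shannon(lambda *rest: f(True, *rest), n - 1)
    lo = shannon(lambda *rest: f(False, *rest), n - 1)
    return lambda x, *rest: (x and hi(*rest)) or ((not x) and lo(*rest))

# Example: 3-input majority, checked exhaustively against its decomposition.
maj = lambda a, b, c: (a and b) or (a and c) or (b and c)
maj2 = shannon(maj, 3)
assert all(maj(*bits) == maj2(*bits) for bits in product([False, True], repeat=3))
```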

I think that I've just argued that one can always exchange chunk complexity for serial complexity. And serial complexity can be handled by external libraries. If my (vague) arguments are correct, then the thread proposal is correct (though "... Just a Bit Slow" seems like an understatement).

The key question is: CAN x = f(y, z, w) always be rewritten as x = g(y, h(z, w))? Not whether it can be done in a reasonably efficient way, but whether it can always be done at all. And then, can this be generalized to arbitrarily high dimensionality? If those both hold, then the hypothesis is correct, and my initial reaction was incorrect.
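
(For what it's worth, the bare existence question has an easy affirmative answer if h is merely allowed to package its two arguments into one value that g then unpacks, and iterating that handles any arity. A minimal sketch of the rewriting, my own illustration:)

```python
# Sketch: any three-argument function f(y, z, w) can be rewritten as
# g(y, h(z, w)) by letting h pack its two arguments into one "chunk"
# and letting g unpack that chunk.  Iterating this handles any arity.
def rewrite(f):
    """Given f(y, z, w), return (g, h) such that f(y, z, w) == g(y, h(z, w))."""
    def h(z, w):
        return (z, w)      # pack two chunks into one

    def g(y, zw):
        z, w = zw          # unpack the chunk
        return f(y, z, w)

    return g, h

# Example with an arbitrary three-argument function.
f = lambda y, z, w: y * z + w
g, h = rewrite(f)
assert f(2, 3, 4) == g(2, h(3, 4))
```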


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT