**From:** James Rogers (*jamesr@best.com*)

**Date:** Sun Dec 03 2000 - 00:59:50 MST

**Next message:** Ben Goertzel: "RE: Is generalisation a limit to intelligence?" **Previous message:** Eliezer S. Yudkowsky: "Re: Is generalisation a limit to intelligence?" **In reply to:** Ben Goertzel: "RE: Is generalisation a limit to intelligence?" **Next in thread:** Ben Goertzel: "RE: Is generalisation a limit to intelligence?" **Reply:** Ben Goertzel: "RE: Is generalisation a limit to intelligence?" **Messages sorted by:** [ date ] [ thread ] [ subject ] [ author ] [ attachment ]

On Sat, 02 Dec 2000, Ben Goertzel wrote:

> However, we lack a quantitative science that can tell us exactly how quickly
> the error rate approaches zero as the memory (&, in a real-time
> situation, processing power able to exploit this memory) approaches
> infinity. Eliezer and I differ in that I believe such a science will
> someday exist; we also differ in that he intuits this error rate
> approaches zero faster than I intuit it does.

I recently joined this list and have been watching this thread with some interest. Some of the discussion seems odd, if only because I am approaching this issue from a completely different angle. A lot of this quantitative science (mathematics really) has already been done. Or at least, a lot more has been done than is apparently assumed by some of what I have seen written on this list.

Information-theoretic approaches have already demonstrated much of what is being questioned, at least insofar as finite-state machines are concerned. Generally speaking, given a finite amount of memory and an arbitrarily long sequence of data (generated by any finite-state machine, no matter how complex), it is possible to attain the minimum possible predictive error rate using universal prediction schemes. An optimal prediction scheme can be algorithmically generated and the error rate figured for any data generated by finite-state machinery. <much lengthy theory omitted> In short, it has been demonstrated that for any finite-state machine, it is possible to ascertain the minimum possible predictive error rate for any data sequence given any finite amount of memory. An optimal prediction scheme will typically approach the theoretical error limit quite fast. However, sub-optimal prediction schemes, nonparametric or unknown models, and similar situations may approach their theoretical error rates quite slowly. It would be trivial for a computer today to calculate error rates for any optimal universal predictive scheme. This would seem to answer the above question and quite a few others I've seen on this thread. The only glaring exception to the above is if AIs don't run on finite-state machinery.
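As a toy illustration of the idea (my own sketch, not the formal universal-prediction construction from the literature: the order-3 context model and the add-one smoothing are illustrative assumptions), even a simple context-counting predictor drives its error rate toward the theoretical floor on data from a finite-state source:

```python
# Sketch of a universal-style predictor: guess each bit from counts of what
# followed the previous k bits. Against any finite-state source, the error
# rate falls toward the source's intrinsic predictability as data accumulates.

from collections import defaultdict

def predict_errors(seq, k=3):
    """Predict each bit of seq from the previous k bits; return the error rate."""
    counts = defaultdict(lambda: [1, 1])  # add-one-smoothed bit counts per context
    errors = 0
    for i in range(len(seq)):
        ctx = tuple(seq[max(0, i - k):i])
        c0, c1 = counts[ctx]
        guess = 1 if c1 > c0 else 0
        errors += (guess != seq[i])
        counts[ctx][seq[i]] += 1   # learn from the revealed bit
    return errors / len(seq)

# A period-3 sequence is the output of a trivial 3-state machine: after the
# predictor has seen each context once or twice, it stops making mistakes.
periodic = [0, 1, 1] * 400
print(predict_errors(periodic))   # error rate near 0 (only early misses)
```

For a truly random (incompressible) source the same predictor does no better than 0.5, which is exactly the point: the achievable error rate is a property of the source, and the scheme converges to it.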

Among the interesting things that have been shown with respect to this is that humans are quite apparently finite-state machines. The first demonstration was by Hagelbarger at Bell Labs (and later Claude Shannon), who showed that humans are apparently unable to generate truly random sequences of any kind; computers using information-theoretic prediction algorithms were able to successfully predict the behavior of humans intentionally attempting to generate random data, with an error rate many, many orders of magnitude below what would be expected if the human participants were actually generating random data.
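The original Hagelbarger/Shannon machines were built from relay logic and tracked simple win/loss patterns; as a loose modern sketch (the 70% alternation rate for the simulated "human" is an invented assumption, chosen because people avoid repeats more often than chance would), a context-counting predictor handily beats a non-random randomizer:

```python
# Sketch of the "mind-reading machine" idea: for each pattern of the
# opponent's last two choices, remember which choice followed, and bet on
# the majority. Any leak from true randomness gets exploited.

import random
from collections import defaultdict

def play(opponent, rounds=10000, seed=1):
    """Return the machine's win rate over `rounds` guesses."""
    rng = random.Random(seed)
    history = [0, 0]
    counts = defaultdict(lambda: [0, 0])  # next-move counts per 2-move context
    wins = 0
    for _ in range(rounds):
        ctx = tuple(history[-2:])
        c0, c1 = counts[ctx]
        guess = 1 if c1 > c0 else 0 if c0 > c1 else rng.randint(0, 1)
        move = opponent(history, rng)
        wins += (guess == move)
        counts[ctx][move] += 1
        history.append(move)
    return wins / rounds

# Crude stand-in for a human trying to "play random": alternates 70% of
# the time instead of the 50% a fair coin would give.
def pseudo_human(history, rng):
    return (1 - history[-1]) if rng.random() < 0.7 else history[-1]

print(play(pseudo_human))   # well above the 0.5 a truly random player allows
```

Against a genuinely random opponent the win rate sits at 0.5 no matter what the machine does; any sustained excess over 0.5 is direct evidence of structure in the "random" sequence.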

I've actually been using information-theoretic approaches in my engines for several years now, with generally superb results across many fields. It has been widely rumored that Claude Shannon made his fortune by "working" the stock market. (As an aside, a couple of years ago I calculated that running an optimal predictive engine against the entire NASDAQ in realtime, based on the best engine I had produced to date, would require a machine capable of 10^11 FLOPS sustained. The amount of memory was reasonably attainable, though.) I've found it odd that information theory is routinely overlooked in AI research, since it provides such a solid foundation for the mathematics of the topic.

I am currently working on putting together a website with a lot of the theory and actual application of my work, quite a few parts of which have been applied in the commercial sector. Splitting my time between this, audio signal analysis/processing research, and something resembling a 9-to-5 has me strapped for time, but hopefully I will get some descriptive and more in-depth articles published on my website relatively soon. I've been working on adaptive and self-learning systems for many years (though I only really started working on AI when it became clear that a lot of the research and development I was doing was very applicable to that particular domain -- my original interests were some neglected areas of database theory).

Regards,

-James Rogers

jamesr@best.com


*This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:35 MDT*