RE: Is generalisation a limit to intelligence?

From: Ben Goertzel (ben@intelligenesis.net)
Date: Sat Dec 02 2000 - 19:31:40 MST


> > But, the refutation: A sufficiently intelligent, self-aware system is
> > quite capable of modifying itself to make itself MORE ERROR-PRONE if it
> > finds through experimentation that this makes it more intelligent ;>
>
> Yes, but that isn't necessarily a solution. You might find that it will get
> stuck in a loop where it is at first too error-prone to realise that it's
> intelligent, and when it works out all the glitches it realises that it
> needs to be more error-prone, which brings it back to the beginning.

Obviously, this will only happen with a system that is very bad at
self-optimization while in its maximally intelligent mode.

It doesn't seem like a likely occurrence, though of course anything's
possible...

> > What I mean is that even if there is a LOT of data, and it's highly varied,
> > there is still a certain amount of overfitting that is inevitable.
>
> That can't be right. Take a single perceptron -- a very basic artificial
> neuron -- that classifies a set of points of types A and B, nearly linearly
> separable, in 2D space using a single line. With a lot of varied data in your
> training set, you can't get any overfitting using just a single neuron.
> Overfitting isn't inherent in all generalisations, it's just a result of
> having too sophisticated soft-/hardware to solve a certain problem. It's like
> trying too hard. The solution is just to not try at all or use minimal
> effort, in which case you'll have an acceptable generalisation.

You're right: if the model you're fitting has very few free parameters
and the data is complex, you won't overfit. My statement was confused; it
only holds true for complex models and, as you point out, complexity here
is relative to the data itself.

On the other hand, in all real intelligent systems I've worked with --
human or AI -- overfitting does occur. There is ample psychological
research showing that humans jump to conclusions more readily than
rationality would suggest. And this is also the story of AI in the
financial domain.

Ultimately one wants a model with as many free parameters as there are
"implicit in the data." But this is not always known, or even well-defined!

> > On the other hand, the more memory you have, the more of this data you can
> > keep in mind for use for new model-building rounds based on new data
> > combined with the old. So the maximum-memory system will achieve the
> > minimum amount of possible overfitting given the data.
>
> I can't grasp this either. It goes totally against my concept of
> overfitting. I always thought that the more sophisticated the method of
> generalisation, the worse the results for easy problems.

This isn't really true. The problem is with intermediate levels of
sophistication. I don't think I was confused here.

For example, in market prediction, one can assume a special boolean form for
trading rules, in which case one has only a small number of rules to search
through, and overfitting to one's data is not so likely.

Or, one can assume trading rules are general boolean functions, in which case
it's possible to overfit one's data very badly using this more sophisticated
generalization method.

On the other hand, if one searches over the space of general boolean functions
for trading rules, and does this VERY WELL, according to a criterion that
balances profitability and simplicity, then one can get rules that perform
better out-of-sample, on average, than the results of assuming special simple
forms.

So in this case, which is a real-life example, sophisticated generalization
methods provide more OPPORTUNITY for overfitting, but also provide the
opportunity to avoid overfitting by doing the sophisticated analysis "right".
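To be concrete about the kind of criterion I mean, here is a toy sketch in
Python. Everything in it (the three binary indicators, the random "returns",
the penalty weight, and the restriction to simple conjunctions rather than
truly general boolean functions) is invented for illustration; it is not our
actual trading code.

import itertools, random

random.seed(0)
# each "day" is (three made-up indicator bits, a made-up next-day return)
days = [((random.random() < 0.5, random.random() < 0.5, random.random() < 0.5),
         random.gauss(0.0, 1.0)) for _ in range(200)]

def score(rule, penalty=0.05):
    # in-sample profit from trading only on days where the rule fires,
    # minus a cost proportional to the rule's complexity
    profit = sum(ret for bits, ret in days if rule(bits))
    return profit - penalty * rule.size

best = None
for lits in itertools.product([None, True, False], repeat=3):
    def rule(bits, lits=lits):
        return all(b == want for b, want in zip(bits, lits) if want is not None)
    rule.size = sum(l is not None for l in lits)    # number of literals used
    if best is None or score(rule) > score(best):
        best = rule
print(best.size, score(best))

The point is only the shape of the search: the penalty term lets you roam a
larger rule space without automatically keeping whatever happens to fit the
in-sample data best.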

ben

> Which is why I think it's a limit to intelligence.


