RE: Is generalisation a limit to intelligence?

From: Ben Goertzel (ben@webmind.com)
Date: Sat Dec 02 2000 - 12:14:17 MST


> How do you force an AI to make generalisations when it's so much
> easier to "look it up"? Are you saying that the AI should just look
> up the exceptions to its generalisations whenever needed -- or
> rather, check if there's an exception whenever it uses a
> generalisation?

I'm saying that, when dealing with a situation quite different from any
that has been encountered before, generalizations are the best guidance.

However, with a big memory, one can store a lot of particulars as well
as generalizations, and then try to make the right decision in each
situation as to whether to apply general or particular knowledge.

With a limited memory, one doesn't have the choice, because one must
throw out most particulars...
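
To illustrate: here's a toy sketch in Python (entirely my own
hypothetical example -- the names and the eviction policy are
inventions for illustration, not anything from Webmind). It keeps a
bounded store of particulars and falls back on a generalization for
situations that don't match any stored particular.

    from collections import OrderedDict

    class HybridMemory:
        def __init__(self, capacity, generalize):
            self.capacity = capacity          # how many particulars fit in memory
            self.particulars = OrderedDict()  # situation -> remembered outcome
            self.generalize = generalize      # fallback rule for novel situations

        def remember(self, situation, outcome):
            self.particulars[situation] = outcome
            self.particulars.move_to_end(situation)
            if len(self.particulars) > self.capacity:
                # limited memory: throw out the oldest particular
                self.particulars.popitem(last=False)

        def decide(self, situation):
            # prefer particular knowledge when it's stored ("look it up"),
            # otherwise fall back on the generalization
            if situation in self.particulars:
                return self.particulars[situation]
            return self.generalize(situation)

    # e.g. the generalization "birds fly", with a stored exception
    mem = HybridMemory(capacity=100, generalize=lambda bird: "flies")
    mem.remember("penguin", "does not fly")
    print(mem.decide("penguin"))  # particular: "does not fly"
    print(mem.decide("robin"))    # generalization: "flies"

The interesting decision, of course, is the one decide() glosses over:
a real system would weigh the reliability of the stored particular
against that of the generalization, rather than always preferring the
lookup.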

Perhaps you're assuming that erroneousness itself adds some useful
creative "noise" to the thought process, whereas a system with a good
enough memory won't make enough errors to lead to creative thoughts.
It's possible. Again, we lack the science to quantify this effect.

> > Lacking infinite memory, some error is necessary due to
> > overgeneralization (overfitting).
>
> Sorry, I obviously didn't explain overfitting properly. With
> "overfitting", I meant the act of taking the data too literally and
> not generalising at all.

Yeah, trust me, I understand the concept of overfitting -- I've done
empirical work with financial trading systems for the last 3 years.

My statement was not careful enough. Even with infinite memory, some
overfitting is necessary if one is generalizing from a finite history.
However, if there are serious memory limitations as well as data
limitations, then overfitting is even MORE severe, because one's
memory can't even hold all the available data, only generalizations
formed from the data. With memory limitations, one is overfitting to
an even smaller subset of the data than is necessary.

For instance, in financial work, some overfitting is necessary because
the market can always move into a new regime not covered by the
patterns of the past. But it's also true that if one incorporates
additional data into one's model (derived from text, non-financial
economic indicators, or foreign markets, for example), one can come up
with more robust models, because one is fitting to more data. In a
data-rich situation, more memory decreases the severity of
overfitting, in general.
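
To put a rough number on that, here's a toy demonstration (again, my
own hypothetical example, not one of our actual trading models): fit
the same flexible model to a memory-limited slice of the history
versus the full history, and compare error on data it never saw.

    import numpy as np

    rng = np.random.default_rng(0)

    def make_data(n):
        # a "true" pattern plus noise, standing in for market history
        x = rng.uniform(-1, 1, n)
        y = np.sin(3 * x) + rng.normal(0, 0.2, n)
        return x, y

    x_full, y_full = make_data(500)   # all the available history
    x_test, y_test = make_data(1000)  # future, unseen data

    def test_error(n_kept, degree=9):
        # a memory limit forces us to keep only n_kept of the 500 points
        coeffs = np.polyfit(x_full[:n_kept], y_full[:n_kept], degree)
        pred = np.polyval(coeffs, x_test)
        return np.mean((pred - y_test) ** 2)

    for n in (15, 50, 500):
        print(f"points kept: {n:4d}   test MSE: {test_error(n):.3f}")

On a typical run the held-out error shrinks as more of the history
fits in "memory", though it never reaches zero: the finite history
never covers every regime the generator -- or the market -- can move
into.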

ben


