From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Sat Dec 02 2000 - 21:03:17 MST
"Joaquim Almgren Gândara" wrote:
> I can't grasp this either. It goes totally against my concept of overfitting. I
> always thought that the more sophisticated method of generalisation, the worse
> results for easy problems. Which is why I think it's a limit to intelligence.
Whaaaat? The more sophisticated the method of generalization, the better
the results for easy problems. And hard ones. If you'd said "The more
data abstracted out" that'd be one thing, but this, I don't get at all.
Sophisticated methods of generalization do not necessarily abstract more
data. Rather, sophisticated generalizations abstract irrelevant data out
while leaving relevant data in. If you abstract relevant data or leave in
irrelevant data, then the method is unsophisticated.
If perfect results are obtained, it implies that the AI has absorbed the raw
data into the predictive procedure, meaning that its definition has
changed from the general to the specific. Models that *do* exhibit
perfect fits to training data *are probably* less accurate about future
data, *for a constant level of intelligence*. It doesn't mean that models
which exhibit better fits *because they are more intelligent* will exhibit
worse results for future data. You can't induce a limit on intelligence
from your observed experiences with data fitting - you are, as it were,
abstracting relevant context and leaving in irrelevant data.
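The distinction above can be sketched in code (my illustration, not from the original post): hold the "intelligence" fixed and compare a model that absorbs the raw data into its predictive procedure against one that abstracts the noise out and keeps the trend. The data-generating rule, the memorizer's nearest-neighbor fallback, and the one-parameter fit are all assumptions chosen for the sketch.

```python
# Minimal sketch: perfect training fit via memorization vs. a simple
# general rule, on data drawn from y = 2x plus noise. (Illustrative
# assumptions throughout; not anyone's actual AI design.)
import random

random.seed(0)
f = lambda x: 2 * x + random.uniform(-0.5, 0.5)  # true rule plus noise
train = [(x, f(x)) for x in range(10)]
test = [(x, f(x)) for x in range(10, 20)]

# "Memorizer": absorbs the raw training data into the predictive
# procedure -- its definition has changed from the general to the specific.
table = dict(train)
def memorizer(x):
    if x in table:
        return table[x]
    nearest = min(table, key=lambda k: abs(k - x))  # fall back on nearest seen point
    return table[nearest]

# "Generalizer": abstracts the irrelevant data (noise) out while leaving
# the relevant data (the linear trend) in -- one fitted slope.
slope = sum(x * y for x, y in train) / sum(x * x for x, y in train)
def generalizer(x):
    return slope * x

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

train_mse_memo, test_mse_memo = mse(memorizer, train), mse(memorizer, test)
train_mse_fit, test_mse_fit = mse(generalizer, train), mse(generalizer, test)
# Memorizer: zero training error, large error on future data.
# Generalizer: nonzero training error, small error on future data.
```

Note the sketch only shows the "constant level of intelligence" half of the argument: within one model class, the perfect fit predicts worse. It says nothing against a model that fits better *because it is more intelligent*.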
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:35 MDT