**From:** Neil H. (*neuronexmachina@gmail.com*)

**Date:** Mon Nov 13 2006 - 19:31:08 MST

**Next message:** Eliezer S. Yudkowsky: "International Earth-Destruction Advisory Board"
**Previous message:** Joel Pitt: Re: "The Netflix challenge and the advance of Science"
**In reply to:** Eliezer S. Yudkowsky: "The Netflix challenge and the advance of Science"
**Messages sorted by:** [ date ] [ thread ] [ subject ] [ author ] [ attachment ]

On 11/13/06, Eliezer S. Yudkowsky <sentience@pobox.com> wrote:

> http://www.netflixprize.com/community/viewtopic.php?id=401
>
> This is a forum devoted to the Netflix Prize, $1 million for producing a
> collaborative filtering algorithm 10% better than Netflix's. The current
> leading contenders are edging up on 5% better than Netflix's algorithm,
> corresponding to a root mean squared error of .90. (I haven't taken a
> potshot at this problem yet, but it's quite interesting to see how things
> go. Right now, the current leading algorithm, beating out many serious
> contenders, is apparently one that was rejected from the NIPS conference
> as uninteresting. Hence the name, "NIPS Reject".)
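For anyone curious how the .90 figure is scored: root mean squared error is just the square root of the average squared difference between predicted and actual ratings. A quick sketch (the function name and the example ratings are mine, not from the prize rules):

```python
import math

def rmse(predicted, actual):
    """Root mean squared error between two equal-length rating lists."""
    assert len(predicted) == len(actual)
    return math.sqrt(
        sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(predicted)
    )

# Hypothetical 1-5 star ratings: a predictor that is off by exactly one
# star on every movie scores an RMSE of exactly 1.0.
print(rmse([3, 4, 2, 5], [4, 5, 3, 4]))  # -> 1.0
```

So "10% better" is measured on this scale: shaving the error down, not getting 10% more ratings exactly right.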

There's a neat thread on that forum which tells a little bit about who
the people on the leaderboard are:

http://www.netflixprize.com/community/viewtopic.php?id=368

It seems that "NIPS Reject" is a PhD student of Geoff Hinton, a
well-known figure in the neural-networks community. I don't know if
this is the same work, but they published a Science paper a few months
ago, "Reducing the Dimensionality of Data with Neural Networks":

http://www.cs.toronto.edu/~rsalakhu/papers/science.pdf
http://www.cs.toronto.edu/~rsalakhu/papers/perspective.pdf
http://www.cs.toronto.edu/~rsalakhu/

I actually hadn't seen this paper before -- it's nice to see that
after all these years somebody's managed to tame autoencoder networks
into doing something practical. From the end of the accompanying
Perspective article:

"This makes it practical to use much deeper networks than were
previously possible, thus allowing more complex nonlinear codes to be
learned. Although there is an engineering flavor to much of the paper,
this is the first practical method that results in a completely
invertible mapping, so that new data may be projected into this very
low dimensional space. The hope is that these lower dimensional
representations will be useful for important tasks such as pattern
recognition, transformation, or visualization. Hinton and Salakhutdinov
have already demonstrated some excellent results in widely varying
domains. This is exciting work with many potential applications in
domains of current interest such as biology, neuroscience, and the
study of the Web.

"Recent advances in machine learning have caused some to consider
neural networks obsolete, even dead. This work suggests that such
announcements are premature."
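As a toy illustration of the basic idea (not the paper's method, which pretrains deep nonlinear nets with restricted Boltzmann machines): an autoencoder squeezes its input through a narrow bottleneck and tries to reconstruct it. Here's a minimal *linear* version in NumPy, with dimensions and learning rate made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points in 10-D that actually lie on a 2-D subspace.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 10))
X = latent @ mixing

# Linear autoencoder: encode 10 -> 2, decode 2 -> 10, trained by
# gradient descent on the squared reconstruction error.
W_enc = rng.normal(scale=0.1, size=(10, 2))
W_dec = rng.normal(scale=0.1, size=(2, 10))
initial_err = np.mean((X @ W_enc @ W_dec - X) ** 2)

lr = 0.01
for _ in range(2000):
    code = X @ W_enc              # low-dimensional representation
    X_hat = code @ W_dec          # reconstruction from the code
    err = X_hat - X
    # Gradients of the (half) mean squared error w.r.t. each weight matrix.
    grad_dec = code.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final_err = np.mean((X @ W_enc @ W_dec - X) ** 2)
print("reconstruction error:", initial_err, "->", final_err)
```

A linear bottleneck like this can only recover what PCA recovers; the point of the Science paper is making the *deep nonlinear* version trainable, which is what had resisted taming for so long.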

-- Neil


*This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:57 MDT*