
Scalable Non-linear Learning with Adaptive Polynomial Expansions
Alekh Agarwal · Alina Beygelzimer · Daniel Hsu · John Langford · Matus J Telgarsky

Tue Dec 09 04:00 PM -- 08:59 PM (PST) @ Level 2, room 210D

Can we effectively learn a nonlinear representation in time comparable to linear learning? We describe a new algorithm that explicitly and adaptively expands higher-order interaction features over base linear representations. The algorithm is designed for extreme computational efficiency, and an extensive experimental study shows that its computation/prediction tradeoff compares very favorably against strong baselines.
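The core idea of adaptive expansion can be illustrated with a minimal sketch: fit a model on the current features, then cross the most important features with the base features to create new interaction terms, and repeat. This is only a toy illustration under assumed details, not the authors' actual algorithm (which is designed for fast online learning); the function name, the weight-magnitude selection rule, and the batch least-squares fit are all illustrative choices.

```python
import numpy as np

def adaptive_poly_expansion(X, y, rounds=2, n_expand=2):
    """Toy sketch of adaptive polynomial expansion (illustrative only).

    Repeatedly: fit a linear model on the current features, pick the
    n_expand features with the largest absolute weights, and add their
    products with the base features as new interaction features.
    """
    feats = X.copy()
    for _ in range(rounds):
        # Fit on current features (min-norm least squares as a stand-in
        # for the fast online learner used in the actual work).
        w, *_ = np.linalg.lstsq(feats, y, rcond=None)
        # Select the currently most important features by |weight|.
        top = np.argsort(-np.abs(w))[:n_expand]
        # Cross each selected feature with every base feature.
        new_cols = np.hstack([feats[:, [j]] * X for j in top])
        feats = np.hstack([feats, new_cols])
    w, *_ = np.linalg.lstsq(feats, y, rcond=None)
    return w, feats

# Usage: y = x0 * x1 is not linearly representable in the base features,
# but one round of expansion introduces the x0*x1 interaction.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
y = X[:, 0] * X[:, 1]
w, feats = adaptive_poly_expansion(X, y)
mse = np.mean((feats @ w - y) ** 2)
```

The point of the adaptivity is that the expansion is data-driven: only interactions involving features the model already finds useful are added, so the feature set grows far more slowly than a full polynomial expansion would.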

Author Information

Alekh Agarwal (Microsoft Research)
Alina Beygelzimer (Yahoo Labs)
Daniel Hsu (Columbia University)
John Langford (Microsoft Research)

John Langford is a research scientist in machine learning, a field which he says "is shifting from an academic discipline to an industrial tool". He is the author of the weblog hunch.net and the principal developer of Vowpal Wabbit. John works at Microsoft Research New York, of which he was one of the founding members, and was previously affiliated with Yahoo! Research, Toyota Technological Institute, and IBM's Watson Research Center. He studied Physics and Computer Science at the California Institute of Technology, earning a double bachelor's degree in 1997, and received his Ph.D. in Computer Science from Carnegie Mellon University in 2002. He was the program co-chair for the 2012 International Conference on Machine Learning.

Matus J Telgarsky (University of Michigan)
