Workshop
Nearest Neighbors for Modern Applications with Massive Data: An Age-old Solution with New Challenges
George H Chen · Devavrat Shah · Christina Lee

Fri Dec 8th 08:00 AM -- 06:30 PM @ 201 B
Event URL: https://nn2017.mit.edu

Many modern methods for prediction leverage nearest neighbor (NN) search to find past training examples most similar to a test example, an idea that dates back in writing to at least the 11th century in the “Book of Optics” by Alhazen. Today, NN methods remain popular, often as a cog in a bigger prediction machine, used for instance in recommendation systems, forecasting baseball player performance and election outcomes, survival analysis in healthcare, image in-painting, crowdsourcing, graphon estimation, and more. The popularity of NN methods is due in no small part to the proliferation of high-quality, fast approximate NN search methods that scale to the high-dimensional, massive datasets typical of contemporary applications. Moreover, NN prediction readily pairs with methods that learn similarities, such as metric learning or Siamese networks. In fact, some well-known methods, including random forests and many boosting methods, can be viewed as nearest neighbor predictors with learned similarities.
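The basic recipe described above can be sketched in a few lines. This is a minimal, illustrative k-NN classifier assuming Euclidean distance and a majority vote; the function and variable names here are our own, not from any particular system discussed at the workshop.

```python
# Minimal k-nearest-neighbor classification sketch (illustrative only):
# predict a test point's label by majority vote over its k closest
# training examples under Euclidean distance.
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_test, k=3):
    """Predict the label of x_test from its k nearest training examples."""
    # Euclidean distance from the test point to every training point.
    dists = np.linalg.norm(X_train - x_test, axis=1)
    # Indices of the k closest training examples.
    nearest = np.argsort(dists)[:k]
    # Majority vote among their labels.
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Toy example: two well-separated clusters with labels 0 and 1.
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [5.0, 5.0], [5.1, 4.9], [4.9, 5.1]])
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X, y, np.array([0.15, 0.1])))  # -> 0
print(knn_predict(X, y, np.array([5.0, 5.05])))  # -> 1
```

The brute-force distance computation above is what fast approximate NN search methods replace when datasets grow large.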

Despite the popularity, success, and age of nearest neighbor methods, our theoretical understanding of them remains surprisingly incomplete (despite the pioneering analyses of Fix, Hodges, Cover, and Hart) and can be disconnected from what practitioners actually want or care about. Many approximate nearest neighbor methods that succeed in practice have no known theoretical guarantees, and many of the guarantees for exact nearest neighbor methods do not readily handle approximation. Meanwhile, many applications use variations on NN methods to which existing theory may not extend, or for which existing theory is not easily usable by a practitioner. Suffice it to say, a lot is lost in translation between the different communities working with NN methods.

In short, NN methods form an exciting field at the intersection of classical statistics, machine learning, data structures, and domain-specific expertise. The aim of this workshop is to bring together theoreticians and practitioners from these different backgrounds, with a diverse range of perspectives, to bring everyone up to speed on:
- The best known statistical and computational guarantees (especially recent non-asymptotic results)
- The latest methods and systems for fast approximate NN search that scale to massive datasets
- The various applications in which NN methods serve as a critical component in prediction or inference
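To make the second bullet concrete, here is one minimal sketch of the kind of idea behind fast approximate NN search: locality-sensitive hashing with random hyperplanes, which restricts the search to a small candidate bucket instead of scanning all points. The code and names are illustrative assumptions, not any specific system presented at the workshop.

```python
# Illustrative sketch of approximate NN search via random-hyperplane
# (locality-sensitive) hashing: points that fall on the same side of a
# set of random hyperplanes share a hash bucket, so a query only scans
# its own bucket rather than the whole dataset.
import numpy as np

rng = np.random.default_rng(0)

def hash_codes(X, planes):
    """Sign pattern of each point against the hyperplanes -> byte string."""
    bits = (X @ planes.T) >= 0
    return [b.tobytes() for b in bits]

def build_index(X, n_planes=8):
    """Hash every point into a bucket keyed by its sign pattern."""
    planes = rng.standard_normal((n_planes, X.shape[1]))
    buckets = {}
    for i, code in enumerate(hash_codes(X, planes)):
        buckets.setdefault(code, []).append(i)
    return planes, buckets

def query(x, X, planes, buckets):
    """Scan only the query's bucket; fall back to brute force if empty."""
    code = hash_codes(x[None, :], planes)[0]
    cand = list(buckets.get(code, range(len(X))))
    dists = np.linalg.norm(X[cand] - x, axis=1)
    return cand[int(np.argmin(dists))]

X = rng.standard_normal((1000, 16))
planes, buckets = build_index(X)
# A point hashes into its own bucket, so it is its own nearest neighbor.
print(query(X[42], X, planes, buckets))  # -> 42
```

The trade-off is the usual one for approximate search: speed comes from examining one bucket, at the cost of possibly missing the true nearest neighbor when it hashes elsewhere; practical systems mitigate this with multiple hash tables or multi-probe strategies.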

By gathering a diverse crowd, we hope attendees will share their perspectives, identify ways to bridge theory and practice, and discuss avenues for future research.

09:00 AM A millennium of nearest neighbor methods – an introduction to the NIPS nearest neighbor workshop 2017 (Talk) George H Chen
09:20 AM Analyzing Robustness of Nearest Neighbors to Adversarial Examples (Talk) Kamalika Chaudhuri
10:05 AM New perspective from Blackwell's "comparisons of experiments" on generative adversarial networks (Talk) Sewoong Oh
11:00 AM Data-dependent methods for similarity search in high dimensions (Talk) Piotr Indyk
11:45 AM Fast k-Nearest Neighbor Search via Prioritized DCI (Talk) Ke Li
01:45 PM How to stop worrying and learn to love Nearest Neighbors (Talk) Alexei Efros
02:30 PM A Platform for Data Science (Talk) Vishal Doshi
02:50 PM Poster Session (encompasses coffee break) (Poster Session)
Beidi Chen, Borja Balle, Daniel Lee, Iuri Frosio, Jitendra Malik, Jan Kautz, Ke Li, Masashi Sugiyama, Miguel A. Carreira-Perpinan, Ramin Raziperchikolaei, Theja Tulabandhula, Yung-Kyun Noh, Adams Wei Yu
04:15 PM The Boundary Forest Algorithm (Talk) Jonathan Yedidia
05:00 PM Iterative Collaborative Filtering for Sparse Matrix Estimation (Talk) Christina Lee

Author Information

George H Chen (Carnegie Mellon University)

George Chen is an assistant professor of information systems at Carnegie Mellon University. He works on nonparametric prediction methods, applied to healthcare and sustainable development. He received his PhD from MIT in Electrical Engineering and Computer Science.

Devavrat Shah (Massachusetts Institute of Technology)

Devavrat Shah is a professor of Electrical Engineering & Computer Science and Director of Statistics and Data Science at MIT. He received his PhD in Computer Science from Stanford. He received the Erlang Prize from the Applied Probability Society of INFORMS in 2010 and the NIPS best paper award in 2008.

Christina Lee (Microsoft Research)
