

Workshop

INTERPOLATE — First Workshop on Interpolation Regularizers and Beyond

Yann Dauphin · David Lopez-Paz · Vikas Verma · Boyi Li

Room 393

## Goals

Interpolation regularizers are an increasingly popular approach to regularizing deep models. For example, the mixup data augmentation method constructs synthetic examples by linearly interpolating random pairs of training data points and their labels. Over their half-decade lifespan, interpolation regularizers have become ubiquitous and fuel state-of-the-art results across many domains, including computer vision and medical diagnosis. This workshop brings together researchers and users of interpolation regularizers to foster research and discussion on advancing and understanding these methods. This inaugural meeting will have no shortage of interaction and energy in pursuit of these goals. Suggested topics include, but are not limited to, the intersection between interpolation regularizers and:

* Domain generalization
* Semi-supervised learning
* Privacy-preserving ML
* Theory
* Robustness
* Fairness
* Vision
* NLP
* Medical applications
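To make the mixup procedure described above concrete, here is a minimal NumPy sketch. It follows the standard formulation (a mixing coefficient drawn from a Beta distribution, applied to a batch paired with a random permutation of itself); the function name, the default `alpha`, and the one-hot label format are illustrative choices, not something prescribed by the workshop.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Mix each example with a randomly chosen partner from the same batch.

    x: (batch, ...) array of inputs.
    y: (batch, num_classes) array of one-hot labels.
    Returns convex combinations of inputs and of labels.
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)       # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))     # random pairing within the batch
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return x_mix, y_mix
```

A model is then trained on `(x_mix, y_mix)` instead of the raw batch; because labels are mixed with the same coefficient as inputs, the usual cross-entropy loss applies unchanged to the soft targets.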

## Important dates

* Paper submission deadline: September 22, 2022
* Paper acceptance notification: October 14, 2022
* Workshop: December 2, 2022

## Call for papers

Authors are invited to submit short papers of up to 4 pages, with an unlimited number of pages for references and supplementary materials. Submissions must be anonymized, as the reviewing process will be double-blind. Please use the NeurIPS template for submissions. To foster discussion, we also welcome submissions that have already been published during COVID; for such papers, the venue of publication should be clearly indicated at submission time. Submission link: https://openreview.net/group?id=NeurIPS.cc/2022/Workshop/INTERPOLATE

## Invited Speakers

* Chelsea Finn, from Stanford, on "Repurposing Mixup for Robustness and Regression"
* Sanjeev Arora, from Princeton, on "Using Interpolation Ideas to provide privacy in Federated Learning settings"
* Kenji Kawaguchi, from NUS, on "The developments of the theory of Mixup"
* Youssef Mroueh, from IBM, on "Fairness and mixing"
* Alex Lamb, from MSR, on "What matters in the world? Exploring algorithms for provably ignoring irrelevant details"

Timezone: America/Los_Angeles

## Schedule