Workshop
Fri Dec 02, 06:30 AM -- 04:00 PM (PST) @ Room 393
INTERPOLATE — First Workshop on Interpolation Regularizers and Beyond
Yann Dauphin · David Lopez-Paz · Vikas Verma · Boyi Li

## Goals

Interpolation regularizers are an increasingly popular approach to regularizing deep models. For example, the mixup data augmentation method constructs synthetic examples by linearly interpolating random pairs of training points (a minimal sketch is given after the topic list below). In the half-decade since their introduction, interpolation regularizers have become ubiquitous, fueling state-of-the-art results in domains ranging from computer vision to medical diagnosis. This workshop brings together researchers and users of interpolation regularizers to foster discussion that advances their development and understanding. This inaugural meeting will have no shortage of interaction and energy in pursuit of these goals. Suggested topics include, but are not limited to, the intersection of interpolation regularizers with:

* Domain generalization
* Semi-supervised learning
* Privacy-preserving ML
* Theory
* Robustness
* Fairness
* Vision
* NLP
* Medical applications
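
To make the core recipe concrete, here is a minimal NumPy sketch of mixup as described above: each synthetic example is a convex combination of two training examples, with the labels mixed by the same coefficient, drawn from a Beta(alpha, alpha) distribution. The function name, signature, and the alpha default are illustrative assumptions, not code from the workshop or the original mixup release.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Illustrative mixup: convexly interpolate a batch with a shuffled copy of itself.

    x: inputs of shape (batch, ...); y: one-hot labels of shape (batch, num_classes).
    alpha: Beta-distribution parameter; smaller values keep mixes close to the originals.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)               # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))             # random pairing of examples
    x_mixed = lam * x + (1.0 - lam) * x[perm]
    y_mixed = lam * y + (1.0 - lam) * y[perm]  # soft labels, same interpolation
    return x_mixed, y_mixed
```

Training then proceeds on the mixed batch with the usual loss; because the labels are interpolated with the same coefficient as the inputs, the model is encouraged to behave linearly between training examples.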

## Important dates

* Paper submission deadline: September 22, 2022
* Paper acceptance notification: October 14, 2022
* Workshop: December 2, 2022

## Call for papers

Authors are invited to submit short papers of up to 4 pages, with an unlimited number of pages for references and supplementary materials. Submissions must be anonymized, as the reviewing process will be double-blind. Please use the NeurIPS template. To foster discussion, we also welcome submissions that have already been published during the COVID period; for such papers, the venue of publication should be clearly indicated at submission time.

Submission link: https://openreview.net/group?id=NeurIPS.cc/2022/Workshop/INTERPOLATE

## Invited Speakers

* Chelsea Finn, from Stanford, on "Repurposing Mixup for Robustness and Regression"
* Sanjeev Arora, from Princeton, on "Using Interpolation Ideas to provide privacy in Federated Learning settings"
* Kenji Kawaguchi, from NUS, on "The developments of the theory of Mixup"
* Youssef Mroueh, from IBM, on "Fairness and mixing"
* Alex Lamb, from MSR, on "What matters in the world? Exploring algorithms for provably ignoring irrelevant details"

## Schedule

Opening Remarks (Remarks)
Youssef Mroueh on Interpolating for fairness (Invited Talk)
Sanjeev Arora on Using Interpolation to provide privacy in Federated Learning settings (Invited Talk)
Chelsea Finn on Repurposing Mixup for Robustness and Regression (Invited Talk)
Panel discussion I (Discussion Panel)
Lunch with random mixing group and organizers (Break)
Kenji Kawaguchi on The developments of the theory of Mixup (Invited Talk)
Alex Lamb on Latent Data Augmentation for Improved Generalization (Invited Talk)
Gabriel Ilharco on Robust and accurate fine-tuning by interpolating weights (Invited Talk)
Panel II (Discussion Panel)
Poster Session (Posters)
Closing Remarks (Remarks)
## Accepted Papers

Effect of mixup Training on Representation Learning (Poster)
FedLN: Federated Learning with Label Noise (Poster)
GroupMixNorm Layer for Learning Fair Models (Poster)
SMILE: Sample-to-feature MIxup for Efficient Transfer LEarning (Poster)
Over-Training with Mixup May Hurt Generalization (Poster)
Benefits of Overparameterized Convolutional Residual Networks: Function Approximation under Smoothness Constraint (Poster)
Interpolating Compressed Parameter Subspaces (Poster)
Momentum-based Weight Interpolation of Strong Zero-Shot Models for Continual Learning (Poster)
Overparameterization Implicitly Regularizes Input-Space Smoothness (Poster)
Covariate Shift Detection via Domain Interpolation Sensitivity (Poster)
LSGANs with Gradient Regularizers are Smooth High-dimensional Interpolators (Poster)
AlignMixup: Improving Representations By Interpolating Aligned Features (Poster)
Pre-train, fine-tune, interpolate: a three-stage strategy for domain generalization (Poster)
Improving Domain Generalization with Interpolation Robustness (Poster)
Differentially Private CutMix for Split Learning with Vision Transformer (Poster)
Sample Relationships through the Lens of Learning Dynamics with Label Information (Poster)
Mixed Samples Data Augmentation with Replacing Latent Vector Components in Normalizing Flow (Poster)
On Data Augmentation and Consistency-based Semi-supervised Relation Extraction (Poster)
Branch-Train-Merge: Embarrassingly Parallel Training of Expert Language Models (Poster)
Mixup for Robust Image Classification - Application in Continuously Transitioning Industrial Sprays (Poster)
Contributed Spotlights (Oral)