Is Importance Weighting Incompatible with Interpolating Classifiers?
Ke Alexander Wang · Niladri Chatterji · Saminul Haque · Tatsunori Hashimoto
Event URL: https://openreview.net/forum?id=pEhpLxVsd03

Importance weighting is a classic technique for handling distribution shift. However, prior work has presented strong empirical and theoretical evidence that importance weights can have little to no effect on overparameterized neural networks. Is importance weighting truly incompatible with the training of overparameterized neural networks? Our paper answers this question in the negative. We show that importance weighting fails not because of overparameterization, but because of the use of exponentially-tailed losses such as the logistic or cross-entropy loss. As a remedy, we show that polynomially-tailed losses restore the ability of importance reweighting to correct distribution shift in overparameterized models. We characterize the behavior of gradient descent on importance-weighted polynomially-tailed losses with overparameterized linear models, and theoretically demonstrate the advantage of polynomially-tailed losses in a label shift setting. Surprisingly, our theory shows that using weights obtained by exponentiating the classical unbiased importance weights can improve performance. Finally, we demonstrate the practical value of our analysis with neural network experiments on a subpopulation shift dataset and a label shift dataset. Our polynomially-tailed loss consistently increases test accuracy by 2-3%.
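The contrast the abstract draws can be made concrete with a small numerical sketch. The snippet below compares the logistic loss, whose tail decays exponentially in the margin, against an illustrative polynomially-tailed loss (the specific functional form and the `alpha` parameter here are chosen for illustration and are not necessarily the loss used in the paper), inside an importance-weighted empirical risk:

```python
import numpy as np

def logistic_loss(z):
    # Exponentially-tailed loss: log(1 + e^{-z}) decays like e^{-z}
    # for large positive margins z, so well-classified points contribute
    # almost nothing to a reweighted objective.
    return np.log1p(np.exp(-z))

def poly_loss(z, alpha=1.0):
    # Illustrative polynomially-tailed loss (assumed form, not the
    # paper's exact definition): decays like z^{-alpha} for large
    # positive margins, is linear on the negative side, and is
    # continuous at z = 0.
    zp = np.maximum(z, 0.0)
    return np.where(z > 0, (1.0 + zp) ** (-alpha), 1.0 - z)

def weighted_risk(loss_fn, margins, weights):
    # Importance-weighted empirical risk: mean of w_i * loss(margin_i).
    return float(np.mean(weights * loss_fn(margins)))

margins = np.array([-1.0, 0.5, 3.0, 10.0])   # hypothetical per-example margins
weights = np.array([2.0, 1.0, 1.0, 0.5])     # e.g. upweight a shifted subgroup

print(weighted_risk(logistic_loss, margins, weights))
print(weighted_risk(poly_loss, margins, weights))
```

Because the polynomial tail vanishes much more slowly, large-margin points retain non-negligible weighted loss, so the importance weights keep influencing the gradients even late in training; with the logistic loss those contributions are exponentially small.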

Author Information

Ke Alexander Wang (Stanford University)
Niladri Chatterji (Stanford University)
Saminul Haque (University of Toronto)
Tatsunori Hashimoto (Stanford University)

Related Events (a corresponding poster, oral, or spotlight)

More from the Same Authors

  • 2021 : Simple Baselines Are Strong Performers for Differentially Private Natural Language Processing »
    Xuechen (Chen) Li · Florian Tramer · Percy Liang · Tatsunori Hashimoto
  • 2021 : Extending the WILDS Benchmark for Unsupervised Adaptation »
    Shiori Sagawa · Pang Wei Koh · Tony Lee · Irena Gao · Sang Michael Xie · Kendrick Shen · Ananya Kumar · Weihua Hu · Michihiro Yasunaga · Henrik Marklund · Sara Beery · Ian Stavness · Jure Leskovec · Kate Saenko · Tatsunori Hashimoto · Sergey Levine · Chelsea Finn · Percy Liang
  • 2021 : Spotlight Talk 3 »
    Ke Alexander Wang
  • 2021 Poster: On the Theory of Reinforcement Learning with Once-per-Episode Feedback »
    Niladri Chatterji · Aldo Pacchiano · Peter Bartlett · Michael Jordan
  • 2020 Poster: Simplifying Hamiltonian and Lagrangian Neural Networks via Explicit Constraints »
    Marc Finzi · Ke Alexander Wang · Andrew Wilson
  • 2020 Spotlight: Simplifying Hamiltonian and Lagrangian Neural Networks via Explicit Constraints »
    Marc Finzi · Ke Alexander Wang · Andrew Wilson
  • 2019 : Lunch Break and Posters »
    Xingyou Song · Elad Hoffer · Wei-Cheng Chang · Jeremy Cohen · Jyoti Islam · Yaniv Blumenfeld · Andreas Madsen · Jonathan Frankle · Sebastian Goldt · Satrajit Chatterjee · Abhishek Panigrahi · Alex Renda · Brian Bartoldson · Israel Birhane · Aristide Baratin · Niladri Chatterji · Roman Novak · Jessica Forde · YiDing Jiang · Yilun Du · Linara Adilova · Michael Kamp · Berry Weinstein · Itay Hubara · Tal Ben-Nun · Torsten Hoefler · Daniel Soudry · Hsiang-Fu Yu · Kai Zhong · Yiming Yang · Inderjit Dhillon · Jaime Carbonell · Yanqing Zhang · Dar Gilboa · Johannes Brandstetter · Alexander R Johansen · Gintare Karolina Dziugaite · Raghav Somani · Ari Morcos · Freddie Kalaitzis · Hanie Sedghi · Lechao Xiao · John Zech · Muqiao Yang · Simran Kaur · Qianli Ma · Yao-Hung Hubert Tsai · Ruslan Salakhutdinov · Sho Yaida · Zachary Lipton · Daniel Roy · Michael Carbin · Florent Krzakala · Lenka Zdeborová · Guy Gur-Ari · Ethan Dyer · Dilip Krishnan · Hossein Mobahi · Samy Bengio · Behnam Neyshabur · Praneeth Netrapalli · Kris Sankaran · Julien Cornebise · Yoshua Bengio · Vincent Michalski · Samira Ebrahimi Kahou · Md Rifat Arefin · Jiri Hron · Jaehoon Lee · Jascha Sohl-Dickstein · Samuel Schoenholz · David Schwab · Dongyu Li · Sang Keun Choe · Henning Petzka · Ashish Verma · Zhichao Lin · Cristian Sminchisescu
  • 2019 : Break / Poster Session 1 »
    Antonia Marcu · Yao-Yuan Yang · Pascale Gourdeau · Chen Zhu · Thodoris Lykouris · Jianfeng Chi · Mark Kozdoba · Arjun Nitin Bhagoji · Xiaoxia Wu · Jay Nandy · Michael T Smith · Bingyang Wen · Yuege Xie · Konstantinos Pitas · Suprosanna Shit · Maksym Andriushchenko · Dingli Yu · Gaël Letarte · Misha Khodak · Hussein Mozannar · Chara Podimata · James Foulds · Yizhen Wang · Huishuai Zhang · Ondrej Kuzelka · Alexander Levine · Nan Lu · Zakaria Mhammedi · Paul Viallard · Diana Cai · Lovedeep Gondara · James Lucas · Yasaman Mahdaviyeh · Aristide Baratin · Rishi Bommasani · Alessandro Barp · Andrew Ilyas · Kaiwen Wu · Jens Behrmann · Omar Rivasplata · Amir Nazemi · Aditi Raghunathan · Will Stephenson · Sahil Singla · Akhil Gupta · YooJung Choi · Yannic Kilcher · Clare Lyle · Edoardo Manino · Andrew Bennett · Zhi Xu · Niladri Chatterji · Emre Barut · Flavien Prost · Rodrigo Toro Icarte · Arno Blaas · Chulhee Yun · Sahin Lale · YiDing Jiang · Tharun Kumar Reddy Medini · Ashkan Rezaei · Alexander Meinke · Stephen Mell · Gary Kazantsev · Shivam Garg · Aradhana Sinha · Vishnu Lokhande · Geovani Rizk · Han Zhao · Aditya Kumar Akash · Jikai Hou · Ali Ghodsi · Matthias Hein · Tyler Sypherd · Yichen Yang · Anastasia Pentina · Pierre Gillot · Antoine Ledent · Guy Gur-Ari · Noah MacAulay · Tianzong Zhang
  • 2019 Poster: Preventing Gradient Attenuation in Lipschitz Constrained Convolutional Networks »
    Qiyang Li · Saminul Haque · Cem Anil · James Lucas · Roger Grosse · Joern-Henrik Jacobsen
  • 2019 Poster: Exact Gaussian Processes on a Million Data Points »
    Ke Alexander Wang · Geoff Pleiss · Jacob Gardner · Stephen Tyree · Kilian Weinberger · Andrew Gordon Wilson
  • 2018 Poster: A Retrieve-and-Edit Framework for Predicting Structured Outputs »
    Tatsunori Hashimoto · Kelvin Guu · Yonatan Oren · Percy Liang
  • 2018 Oral: A Retrieve-and-Edit Framework for Predicting Structured Outputs »
    Tatsunori Hashimoto · Kelvin Guu · Yonatan Oren · Percy Liang
  • 2017 Poster: Alternating minimization for dictionary learning with random initialization »
    Niladri Chatterji · Peter Bartlett
  • 2017 Poster: Unsupervised Transformation Learning via Convex Relaxations »
    Tatsunori Hashimoto · Percy Liang · John Duchi