Exploring Covariate and Concept Shift for Out-of-Distribution Detection
Junjiao Tian · Yen-Chang Hsu · Yilin Shen · Hongxia Jin · Zsolt Kira
Event URL: https://openreview.net/forum?id=3AWGg4CySNh »

The modeling of what a neural network does not know -- i.e. uncertainty -- is fundamentally important in both theory and practice, especially when the model encounters distribution shift during inference. Bayesian inference has been regarded as the most principled approach to uncertainty modeling because it explicitly models two types of uncertainty: epistemic uncertainty and aleatoric uncertainty, in the form of posteriors over parameters and the data likelihood, respectively. Epistemic uncertainty captures the uncertainty of model parameters due to lack of data, while aleatoric uncertainty captures inherent data ambiguity. Practically, epistemic uncertainty is often assessed by a model's out-of-distribution (OOD) detection performance or calibration, while aleatoric uncertainty can be assessed by in-distribution error detection. Recent attempts to model uncertainty with deterministic models fail to disentangle these two uncertainties because of their non-Bayesian nature; however, it is still possible to capture them empirically in a deterministic model using a combination of density estimation and softmax entropy. This leaves us with the question: how should we approach OOD detection and calibration for deterministic (as opposed to Bayesian) and discriminative (as opposed to generative) models? This is arguably the most widely used class of models, owing to its speed (compared to Bayesian models) and simplicity (compared to generative models). The conventional association of OOD data with epistemic uncertainty fails for this class of models, specifically because they do not reason about what has changed in the input distribution or the mechanisms through which those changes affect the network; a different perspective is needed to analyze them.
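The density-plus-entropy recipe the abstract alludes to can be sketched roughly as follows. This is an illustrative assumption, not the authors' exact method: softmax entropy serves as an aleatoric proxy, and a simple Gaussian fit to penultimate-layer features (a Mahalanobis-style score) serves as an epistemic/OOD proxy. All function names here are hypothetical.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def softmax_entropy(logits):
    # Aleatoric proxy: entropy of the predictive distribution.
    # High entropy = ambiguous in-distribution input.
    p = softmax(logits)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def fit_gaussian_density(train_features):
    # Epistemic/OOD proxy (an assumed, simplified choice): fit one
    # Gaussian to training features and score new points by their
    # Mahalanobis distance from it. Large score = far from the
    # training density = likely out-of-distribution.
    mu = train_features.mean(axis=0)
    cov = np.cov(train_features, rowvar=False)
    cov += 1e-6 * np.eye(cov.shape[0])  # regularize for invertibility
    prec = np.linalg.inv(cov)

    def ood_score(x):
        d = x - mu
        return 0.5 * np.einsum('...i,ij,...j->...', d, prec, d)

    return ood_score
```

A uniform predictive distribution maximizes `softmax_entropy` (at log of the class count), while `ood_score` is zero at the feature mean and grows for points far from the training data, so the two scores separate the two kinds of uncertainty empirically.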

Author Information

Junjiao Tian (Georgia Institute of Technology)
Yen-Chang Hsu (Georgia Institute of Technology)
Yilin Shen (Samsung Research America)
Hongxia Jin (Samsung Research America)
Zsolt Kira (Georgia Institute of Technology)

More from the Same Authors

  • 2021 Spotlight: Habitat 2.0: Training Home Assistants to Rearrange their Habitat »
    Andrew Szot · Alexander Clegg · Eric Undersander · Erik Wijmans · Yili Zhao · John Turner · Noah Maestre · Mustafa Mukadam · Devendra Singh Chaplot · Oleksandr Maksymets · Aaron Gokaslan · Vladimír Vondruš · Sameer Dharur · Franziska Meier · Wojciech Galuba · Angel Chang · Zsolt Kira · Vladlen Koltun · Jitendra Malik · Manolis Savva · Dhruv Batra
  • 2021 Spotlight: A Geometric Perspective towards Neural Calibration via Sensitivity Decomposition »
    Junjiao Tian · Dylan Yung · Yen-Chang Hsu · Zsolt Kira
  • 2021 : Habitat 2.0: Training Home Assistants to Rearrange their Habitat »
    Andrew Szot · Alexander Clegg · Eric Undersander · Erik Wijmans · Yili Zhao · Noah Maestre · Mustafa Mukadam · Oleksandr Maksymets · Aaron Gokaslan · Sameer Dharur · Franziska Meier · Wojciech Galuba · Angel Chang · Zsolt Kira · Vladlen Koltun · Jitendra Malik · Manolis Savva · Dhruv Batra
  • 2021 Poster: A Geometric Perspective towards Neural Calibration via Sensitivity Decomposition »
    Junjiao Tian · Dylan Yung · Yen-Chang Hsu · Zsolt Kira
  • 2021 Poster: Habitat 2.0: Training Home Assistants to Rearrange their Habitat »
    Andrew Szot · Alexander Clegg · Eric Undersander · Erik Wijmans · Yili Zhao · John Turner · Noah Maestre · Mustafa Mukadam · Devendra Singh Chaplot · Oleksandr Maksymets · Aaron Gokaslan · Vladimír Vondruš · Sameer Dharur · Franziska Meier · Wojciech Galuba · Angel Chang · Zsolt Kira · Vladlen Koltun · Jitendra Malik · Manolis Savva · Dhruv Batra
  • 2020 Poster: Posterior Re-calibration for Imbalanced Datasets »
    Junjiao Tian · Yen-Cheng Liu · Nathaniel Glaser · Yen-Chang Hsu · Zsolt Kira
  • 2019 Poster: Reward Constrained Interactive Recommendation with Natural Language Feedback »
    Ruiyi Zhang · Tong Yu · Yilin Shen · Hongxia Jin · Changyou Chen
  • 2018 : Lunch & Posters »
    Haytham Fayek · German Parisi · Brian Xu · Pramod Kaushik Mudrakarta · Sophie Cerf · Sarah Wassermann · Davit Soselia · Rahaf Aljundi · Mohamed Elhoseiny · Frantzeska Lavda · Kevin J Liang · Arslan Chaudhry · Sanmit Narvekar · Vincenzo Lomonaco · Wesley Chung · Michael Chang · Ying Zhao · Zsolt Kira · Pouya Bashivan · Banafsheh Rafiee · Oleksiy Ostapenko · Andrew Jones · Christos Kaplanis · Sinan Kalkan · Dan Teng · Xu He · Vincent Liu · Somjit Nath · Sungsoo Ahn · Ting Chen · Shenyang Huang · Yash Chandak · Nathan Sprague · Martin Schrimpf · Tony Kendall · Jonathan Schwarz · Michael Li · Yunshu Du · Yen-Chang Hsu · Samira Abnar · Bo Wang