Deep neural networks are powerful models that have attained remarkable results on a variety of tasks. These models are highly effective when the training and test data are drawn from the same distribution. However, it is not clear how a network will behave when it is fed an out-of-distribution example. In this work, we consider the problem of out-of-distribution detection in neural networks. We propose to use multiple semantic dense representations as the target label, instead of a single sparse (one-hot) representation. Specifically, we use several word representations, obtained from different corpora or architectures, as target labels. We evaluated the proposed model on computer vision and speech command detection tasks and compared it to previous methods. Results suggest that our method compares favorably with previous work. In addition, we demonstrate the effectiveness of our approach for detecting misclassified and adversarial examples.
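The idea of scoring with multiple dense targets can be illustrated with a minimal sketch (not the authors' code, and the function names and toy dimensions are hypothetical): each "head" of the network predicts a dense embedding in a different word-representation space, and an input whose predictions are far from every class embedding in all spaces is flagged as out-of-distribution.

```python
# Hedged sketch of OOD scoring with multiple dense (word-embedding) targets.
# The embedding spaces and class counts below are toy assumptions.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def ood_score(predicted, label_embeddings):
    """predicted: one predicted vector per embedding head.
    label_embeddings: one (num_classes, dim) array per head.
    Returns the average (over heads) max cosine similarity to any
    class embedding; LOW values suggest an out-of-distribution input."""
    sims = [max(cosine(p, e) for e in E)
            for p, E in zip(predicted, label_embeddings)]
    return sum(sims) / len(sims)

# Two hypothetical embedding spaces (e.g., trained on different corpora),
# 5 classes each, with different dimensionalities.
rng = np.random.default_rng(0)
heads = [rng.normal(size=(5, 50)), rng.normal(size=(5, 64))]

# An "in-distribution" prediction lands near class 2 in both spaces;
# an "out-of-distribution" prediction is far from every class embedding.
in_dist = [heads[0][2] + 0.01 * rng.normal(size=50),
           heads[1][2] + 0.01 * rng.normal(size=64)]
out_dist = [rng.normal(size=50), rng.normal(size=64)]

print(ood_score(in_dist, heads) > ood_score(out_dist, heads))
```

Under this scheme, thresholding the score gives a detector: inputs whose predicted embeddings disagree with all class embeddings across the independent representation spaces are rejected.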
Author Information
Gabi Shalev (Dept. of Computer Science, Bar-Ilan University)
Yossi Adi (Bar-Ilan University)
Yossi Keshet (Bar-Ilan University)
More from the Same Authors
- 2017 Poster: Houdini: Fooling Deep Structured Visual and Speech Recognition Models with Adversarial Examples »
  Moustapha Cisse · Yossi Adi · Natalia Neverova · Joseph Keshet
- 2013 Poster: Learning Efficient Random Maximum A-Posteriori Predictors with Non-Decomposable Loss Functions »
  Tamir Hazan · Subhransu Maji · Joseph Keshet · Tommi Jaakkola
- 2011 Poster: Generalization Bounds and Consistency for Latent Structural Probit and Ramp Loss »
  David McAllester · Joseph Keshet
- 2011 Oral: Generalization Bounds and Consistency for Latent Structural Probit and Ramp Loss »
  David McAllester · Joseph Keshet
- 2010 Poster: Direct Loss Minimization for Structured Prediction »
  David A. McAllester · Tamir Hazan · Joseph Keshet
- 2008 Poster: Support Vector Machines with a Reject Option »
  Yves Grandvalet · Joseph Keshet · Alain Rakotomamonjy · Stephane Canu