The tremendous success of generative models in recent years raises the question of whether they can also be used to perform classification. Generative models have been used as adversarially robust classifiers on simple datasets such as MNIST, but this robustness has not been observed on more complex datasets like CIFAR-10. Additionally, on natural image datasets, previous results have suggested a trade-off between the likelihood of the data and classification accuracy. In this work, we investigate score-based generative models as classifiers for natural images. We show that these models not only obtain competitive likelihood values but simultaneously achieve state-of-the-art classification accuracy for generative classifiers on CIFAR-10. Nevertheless, we find that these models are only slightly, if at all, more robust than discriminative baseline models on out-of-distribution tasks based on common image corruptions. Similarly, and contrary to prior results, we find that score-based classifiers are prone to worst-case distribution shifts in the form of adversarial perturbations. Our work highlights that score-based generative models are closing the gap in classification accuracy compared to standard discriminative models. While they do not yet deliver on the promise of adversarial and out-of-domain robustness, they provide a different approach to classification that warrants further research.
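The core idea behind a generative classifier is to apply Bayes' rule: classify an input by the class whose generative model assigns it the highest likelihood, i.e. argmax over y of log p(x | y) + log p(y). A minimal, self-contained sketch of this decision rule is below; it uses simple Gaussian class-conditional densities purely as stand-ins for the score-based likelihood models the paper studies (the helper names are illustrative, not the authors' implementation):

```python
import numpy as np

def gaussian_log_likelihood(x, mean, var):
    """Log density of an isotropic Gaussian, summed over input dimensions."""
    return float(np.sum(-0.5 * np.log(2 * np.pi * var)
                        - 0.5 * (x - mean) ** 2 / var))

def generative_classify(x, class_params, log_priors):
    """Bayes-rule classification: argmax_y log p(x | y) + log p(y).

    class_params: list of (mean, variance) per class, standing in for
    per-class likelihood models such as score-based generative models.
    """
    scores = [gaussian_log_likelihood(x, mean, var) + log_prior
              for (mean, var), log_prior in zip(class_params, log_priors)]
    return int(np.argmax(scores))

# Two toy classes with different means and a uniform prior.
params = [(np.array([0.0, 0.0]), 1.0), (np.array([3.0, 3.0]), 1.0)]
log_priors = [np.log(0.5), np.log(0.5)]

print(generative_classify(np.array([2.9, 3.1]), params, log_priors))  # → 1
print(generative_classify(np.array([0.1, -0.2]), params, log_priors))  # → 0
```

In the paper's setting the Gaussian densities would be replaced by exact log-likelihoods computed from a score-based model, which is what makes the likelihood/accuracy comparison in the abstract possible.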
Author Information
Roland S. Zimmermann (University of Tübingen, International Max Planck Research School for Intelligent Systems)
Lukas Schott (University of Tübingen)
Yang Song (Stanford University)
Benjamin Dunn (Norwegian University of Science and Technology)
David Klindt (NTNU)
Related Events (a corresponding poster, oral, or spotlight)
-
2021 : Score-Based Generative Classifiers »
More from the Same Authors
-
2021 Spotlight: Removing Inter-Experimental Variability from Functional Data in Systems Neuroscience »
Dominic Gonschorek · Larissa Höfling · Klaudia P. Szatko · Katrin Franke · Timm Schubert · Benjamin Dunn · Philipp Berens · David Klindt · Thomas Euler -
2021 Spotlight: How Well do Feature Visualizations Support Causal Understanding of CNN Activations? »
Roland S. Zimmermann · Judy Borowski · Robert Geirhos · Matthias Bethge · Thomas Wallis · Wieland Brendel -
2021 Spotlight: Maximum Likelihood Training of Score-Based Diffusion Models »
Yang Song · Conor Durkan · Iain Murray · Stefano Ermon -
2022 : Topological Ensemble Detection with Differentiable Yoking »
David Klindt · Sigurd Gaukstad · Erik Hermansen · Melvin Vaupel · Benjamin Dunn -
2023 : Scale Alone Does not Improve Mechanistic Interpretability in Vision Models »
Roland S. Zimmermann · Thomas Klein · Wieland Brendel -
2023 : Evaluation of Representational Similarity Scores Across Human Visual Cortex »
Francisco Acosta · Colin Conwell · David Klindt · Nina Miolane -
2023 Poster: Scale Alone Does not Improve Mechanistic Interpretability in Vision Models »
Roland S. Zimmermann · Thomas Klein · Wieland Brendel -
2022 : Keynote Talk 1 »
Yang Song -
2022 Workshop: NeurIPS 2022 Workshop on Score-Based Methods »
Yingzhen Li · Yang Song · Valentin De Bortoli · Francois-Xavier Briol · Wenbo Gong · Alexia Jolicoeur-Martineau · Arash Vahdat -
2022 Poster: Increasing Confidence in Adversarial Robustness Evaluations »
Roland S. Zimmermann · Wieland Brendel · Florian Tramer · Nicholas Carlini -
2021 Poster: How Well do Feature Visualizations Support Causal Understanding of CNN Activations? »
Roland S. Zimmermann · Judy Borowski · Robert Geirhos · Matthias Bethge · Thomas Wallis · Wieland Brendel -
2021 Poster: Imitation with Neural Density Models »
Kuno Kim · Akshat Jindal · Yang Song · Jiaming Song · Yanan Sui · Stefano Ermon -
2021 Poster: Estimating High Order Gradients of the Data Distribution by Denoising »
Chenlin Meng · Yang Song · Wenzhe Li · Stefano Ermon -
2021 Poster: Maximum Likelihood Training of Score-Based Diffusion Models »
Yang Song · Conor Durkan · Iain Murray · Stefano Ermon -
2021 Poster: Pseudo-Spherical Contrastive Divergence »
Lantao Yu · Jiaming Song · Yang Song · Stefano Ermon -
2021 Poster: CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation »
Yusuke Tashiro · Jiaming Song · Yang Song · Stefano Ermon -
2021 Poster: Removing Inter-Experimental Variability from Functional Data in Systems Neuroscience »
Dominic Gonschorek · Larissa Höfling · Klaudia P. Szatko · Katrin Franke · Timm Schubert · Benjamin Dunn · Philipp Berens · David Klindt · Thomas Euler -
2020 Poster: Improved Techniques for Training Score-Based Generative Models »
Yang Song · Stefano Ermon -
2020 Poster: System Identification with Biophysical Constraints: A Circuit Model of the Inner Retina »
Cornelius Schröder · David Klindt · Sarah Strauss · Katrin Franke · Matthias Bethge · Thomas Euler · Philipp Berens -
2020 Poster: Efficient Learning of Generative Models via Finite-Difference Score Matching »
Tianyu Pang · Kun Xu · Chongxuan LI · Yang Song · Stefano Ermon · Jun Zhu -
2020 Spotlight: System Identification with Biophysical Constraints: A Circuit Model of the Inner Retina »
Cornelius Schröder · David Klindt · Sarah Strauss · Katrin Franke · Matthias Bethge · Thomas Euler · Philipp Berens -
2020 Poster: Autoregressive Score Matching »
Chenlin Meng · Lantao Yu · Yang Song · Jiaming Song · Stefano Ermon -
2020 Poster: Diversity can be Transferred: Output Diversification for White- and Black-box Attacks »
Yusuke Tashiro · Yang Song · Stefano Ermon -
2019 Poster: MintNet: Building Invertible Neural Networks with Masked Convolutions »
Yang Song · Chenlin Meng · Stefano Ermon -
2019 Poster: Generative Modeling by Estimating Gradients of the Data Distribution »
Yang Song · Stefano Ermon -
2019 Oral: Generative Modeling by Estimating Gradients of the Data Distribution »
Yang Song · Stefano Ermon -
2019 Poster: Efficient Graph Generation with Graph Recurrent Attention Networks »
Renjie Liao · Yujia Li · Yang Song · Shenlong Wang · Will Hamilton · David Duvenaud · Raquel Urtasun · Richard Zemel -
2018 : Accepted papers »
Sven Gowal · Bogdan Kulynych · Marius Mosbach · Nicholas Frosst · Phil Roth · Utku Ozbulak · Simral Chaudhary · Toshiki Shibahara · Salome Viljoen · Nikita Samarin · Briland Hitaj · Rohan Taori · Emanuel Moss · Melody Guan · Lukas Schott · Angus Galloway · Anna Golubeva · Xiaomeng Jin · Felix Kreuk · Akshayvarun Subramanya · Vipin Pillai · Hamed Pirsiavash · Giuseppe Ateniese · Ankita Kalra · Logan Engstrom · Anish Athalye -
2018 : Adversarial Vision Challenge: Poster Session »
Yash Sharma · Lars Holdijk · Sascha Saralajew · Ziang Yan · Dmitrii Rashchenko · Iuliia Rashchenko · Jongseong Jang · Jungin Lee · jihyeun Yoon · KYUNGYUL KIM · Florian Laurent · Lukas Schott -
2018 Poster: Constructing Unrestricted Adversarial Examples with Generative Models »
Yang Song · Rui Shu · Nate Kushman · Stefano Ermon -
2017 : Poster Spotlights I »
Taesik Na · Yang Song · Aman Sinha · Richard Shin · Qiuyuan Huang · Nina Narodytska · Matt Staib · Kexin Pei · Fnu Suya · Amirata Ghorbani · Jacob Buckman · Matthias Hein · Huan Zhang · Yanjun Qi · Yuan Tian · Min Du · Dimitris Tsipras -
2017 Poster: Neural system identification for large populations separating “what” and “where” »
David Klindt · Alexander Ecker · Thomas Euler · Matthias Bethge