Randomized smoothing has been shown to provide good certified-robustness guarantees for high-dimensional classification problems. It uses the probabilities of the two most likely classes around an input point under a smoothing distribution to generate a certified radius for a classifier's prediction. However, most smoothing methods do not give us any information about the \emph{confidence} with which the underlying classifier (e.g., a deep neural network) makes a prediction. In this work, we propose a method to generate certified radii for the prediction confidence of the smoothed classifier. We consider two notions for quantifying confidence: the average prediction score of a class, and the margin by which the average prediction score of one class exceeds that of another. We modify the Neyman-Pearson lemma (a key theorem in randomized smoothing) to design a procedure for computing the certified radius within which the confidence is guaranteed to stay above a certain threshold. Our experimental results on the CIFAR-10 and ImageNet datasets show that using information about the distribution of the confidence scores allows us to achieve a significantly better certified radius than ignoring it. Thus, we demonstrate that extra information about the base classifier at the input point can help improve certified guarantees for the smoothed classifier. Code for the experiments is available at \url{https://github.com/aounon/cdf-smoothing}.
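As background, the standard randomized-smoothing certificate that this work builds on (Cohen et al., 2019) can be sketched as follows: given lower/upper bounds on the probabilities of the top two classes under Gaussian smoothing, the smoothed prediction is certifiably stable within an L2 radius. This is a minimal illustrative sketch, not the authors' implementation; the function name and the use of class-probability bounds as direct inputs are assumptions for exposition, and the paper's actual contribution extends this style of bound from the predicted label to the prediction \emph{confidence} via a modified Neyman-Pearson argument.

```python
from statistics import NormalDist

def certified_radius(p_a: float, p_b: float, sigma: float) -> float:
    """L2 certified radius for a Gaussian-smoothed classifier.

    p_a   : lower bound on the smoothed probability of the top class
    p_b   : upper bound on the smoothed probability of the runner-up class
    sigma : standard deviation of the Gaussian smoothing noise

    Radius = (sigma / 2) * (Phi^{-1}(p_a) - Phi^{-1}(p_b)),
    where Phi^{-1} is the standard normal inverse CDF. The certificate
    is vacuous (radius <= 0) unless p_a > p_b.
    """
    phi_inv = NormalDist().inv_cdf
    return 0.5 * sigma * (phi_inv(p_a) - phi_inv(p_b))
```

For example, with `p_a = 0.9`, `p_b = 0.1`, and `sigma = 0.5`, the certified radius is about 0.64; in practice the probability bounds are themselves estimated by Monte Carlo sampling with a confidence correction, and the CDF-based method in this paper uses the full distribution of confidence scores rather than only the top-two class probabilities.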
Author Information
Aounon Kumar (University of Maryland, College Park)
Alexander Levine (University of Maryland, College Park)
Soheil Feizi (University of Maryland)
Tom Goldstein (University of Maryland)
More from the Same Authors
-
2020 : An Open Review of OpenReview: A Critical Analysis of the Machine Learning Conference Review Process »
David Tran · Alex Valtchanov · Keshav R Ganapathy · Raymond Feng · Eric Slud · Micah Goldblum · Tom Goldstein -
2021 : Execute Order 66: Targeted Data Poisoning for Reinforcement Learning via Minuscule Perturbations »
Harrison Foley · Liam Fowl · Tom Goldstein · Gavin Taylor -
2021 Poster: Improving Deep Learning Interpretability by Saliency Guided Training »
Aya Abdelsalam Ismail · Hector Corrada Bravo · Soheil Feizi -
2020 : The Intrinsic Dimension of Images and Its Impact on Learning »
Chen Zhu · Micah Goldblum · Ahmed Abdelkader · Tom Goldstein · Phillip Pope -
2020 : Opening Remarks »
Reinhard Heckel · Paul Hand · Soheil Feizi · Lenka Zdeborová · Richard Baraniuk -
2020 Workshop: Workshop on Deep Learning and Inverse Problems »
Reinhard Heckel · Paul Hand · Richard Baraniuk · Lenka Zdeborová · Soheil Feizi -
2020 Workshop: Workshop on Dataset Curation and Security »
Nathalie Baracaldo Angel · Yonatan Bisk · Avrim Blum · Michael Curry · John Dickerson · Micah Goldblum · Tom Goldstein · Bo Li · Avi Schwarzschild -
2020 Poster: Detection as Regression: Certified Object Detection with Median Smoothing »
Ping-yeh Chiang · Michael Curry · Ahmed Abdelkader · Aounon Kumar · John Dickerson · Tom Goldstein -
2020 Poster: Adversarially Robust Few-Shot Learning: A Meta-Learning Approach »
Micah Goldblum · Liam Fowl · Tom Goldstein -
2020 Poster: Robust Optimal Transport with Applications in Generative Modeling and Domain Adaptation »
Yogesh Balaji · Rama Chellappa · Soheil Feizi -
2020 Poster: MetaPoison: Practical General-purpose Clean-label Data Poisoning »
W. Ronny Huang · Jonas Geiping · Liam Fowl · Gavin Taylor · Tom Goldstein -
2020 Poster: Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks »
Wei-An Lin · Chun Pong Lau · Alexander Levine · Rama Chellappa · Soheil Feizi -
2020 Poster: Benchmarking Deep Learning Interpretability in Time Series Predictions »
Aya Abdelsalam Ismail · Mohamed Gunady · Hector Corrada Bravo · Soheil Feizi -
2020 Poster: Certifying Strategyproof Auction Networks »
Michael Curry · Ping-yeh Chiang · Tom Goldstein · John Dickerson -
2020 Poster: (De)Randomized Smoothing for Certifiable Defense against Patch Attacks »
Alexander Levine · Soheil Feizi -
2019 : Soheil Feizi, "Certifiable Defenses against Adversarial Attacks" »
Soheil Feizi -
2019 : Break / Poster Session 1 »
Antonia Marcu · Yao-Yuan Yang · Pascale Gourdeau · Chen Zhu · Thodoris Lykouris · Jianfeng Chi · Mark Kozdoba · Arjun Nitin Bhagoji · Xiaoxia Wu · Jay Nandy · Michael T Smith · Bingyang Wen · Yuege Xie · Konstantinos Pitas · Suprosanna Shit · Maksym Andriushchenko · Dingli Yu · Gaël Letarte · Misha Khodak · Hussein Mozannar · Chara Podimata · James Foulds · Yizhen Wang · Huishuai Zhang · Ondrej Kuzelka · Alexander Levine · Nan Lu · Zakaria Mhammedi · Paul Viallard · Diana Cai · Lovedeep Gondara · James Lucas · Yasaman Mahdaviyeh · Aristide Baratin · Rishi Bommasani · Alessandro Barp · Andrew Ilyas · Kaiwen Wu · Jens Behrmann · Omar Rivasplata · Amir Nazemi · Aditi Raghunathan · Will Stephenson · Sahil Singla · Akhil Gupta · YooJung Choi · Yannic Kilcher · Clare Lyle · Edoardo Manino · Andrew Bennett · Zhi Xu · Niladri Chatterji · Emre Barut · Flavien Prost · Rodrigo Toro Icarte · Arno Blaas · Chulhee Yun · Sahin Lale · YiDing Jiang · Tharun Kumar Reddy Medini · Ashkan Rezaei · Alexander Meinke · Stephen Mell · Gary Kazantsev · Shivam Garg · Aradhana Sinha · Vishnu Lokhande · Geovani Rizk · Han Zhao · Aditya Kumar Akash · Jikai Hou · Ali Ghodsi · Matthias Hein · Tyler Sypherd · Yichen Yang · Anastasia Pentina · Pierre Gillot · Antoine Ledent · Guy Gur-Ari · Noah MacAulay · Tianzong Zhang -
2019 Poster: Functional Adversarial Attacks »
Cassidy Laidlaw · Soheil Feizi -
2019 Poster: Quantum Wasserstein Generative Adversarial Networks »
Shouvanik Chakrabarti · Huang Yiming · Tongyang Li · Soheil Feizi · Xiaodi Wu -
2019 Poster: Adversarial training for free! »
Ali Shafahi · Mahyar Najibi · Mohammad Amin Ghiasi · Zheng Xu · John Dickerson · Christoph Studer · Larry Davis · Gavin Taylor · Tom Goldstein -
2019 Poster: Input-Cell Attention Reduces Vanishing Saliency of Recurrent Neural Networks »
Aya Abdelsalam Ismail · Mohamed Gunady · Luiz Pessoa · Hector Corrada Bravo · Soheil Feizi -
2018 Poster: Porcupine Neural Networks: Approximating Neural Network Landscapes »
Soheil Feizi · Hamid Javadi · Jesse Zhang · David Tse -
2018 Poster: Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks »
Ali Shafahi · W. Ronny Huang · Mahyar Najibi · Octavian Suciu · Christoph Studer · Tudor Dumitras · Tom Goldstein -
2018 Poster: Visualizing the Loss Landscape of Neural Nets »
Hao Li · Zheng Xu · Gavin Taylor · Christoph Studer · Tom Goldstein -
2017 Poster: Tensor Biclustering »
Soheil Feizi · Hamid Javadi · David Tse -
2017 Poster: Training Quantized Nets: A Deeper Understanding »
Hao Li · Soham De · Zheng Xu · Christoph Studer · Hanan Samet · Tom Goldstein -
2015 : Spotlight »
Furong Huang · William Gray Roncal · Tom Goldstein -
2015 Poster: Adaptive Primal-Dual Splitting Methods for Statistical Learning and Image Processing »
Tom Goldstein · Min Li · Xiaoming Yuan -
2014 Poster: Biclustering Using Message Passing »
Luke O'Connor · Soheil Feizi