Poster
On the Hardness of Robust Classification
Pascale Gourdeau · Varun Kanade · Marta Kwiatkowska · James Worrell

Tue Dec 10 10:45 AM -- 12:45 PM (PST) @ East Exhibition Hall B + C #225
It is becoming increasingly important to understand the vulnerability of machine learning models to adversarial attacks. In this paper we study the feasibility of robust learning from the perspective of computational learning theory, considering both sample and computational complexity. In particular, our definition of robust learnability requires polynomial sample complexity. We start with two negative results. We show that no non-trivial concept class can be robustly learned in the distribution-free setting against an adversary who can perturb just a single input bit. Moreover, we show that the class of monotone conjunctions cannot be robustly learned under the uniform distribution against an adversary who can perturb $\omega(\log n)$ input bits. However, if the adversary is restricted to perturbing $O(\log n)$ bits, then the class of monotone conjunctions can be robustly learned with respect to a general class of distributions (including the uniform distribution). Finally, we provide a simple proof of the computational hardness of robust learning on the Boolean hypercube. Unlike previous results of this nature, our result does not rely on another computational model (e.g., the statistical query model) nor on any hardness assumption other than the existence of a hard learning problem in the PAC framework.
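The adversarial model in the abstract is bit-flipping on the Boolean hypercube: an adversary may perturb a bounded number of input bits before the classifier sees the point. As a minimal, hypothetical illustration (not code from the paper, and not the authors' learning algorithm), the sketch below computes, for a monotone conjunction, the exact number of bit flips an adversary needs to change the classifier's output on a given point. It makes concrete why even a single-bit budget is powerful on positively classified points, the regime that the first negative result concerns. All function names and example values are invented for illustration.

```python
# Illustrative sketch of the bit-flip adversary on {0,1}^n for a monotone
# conjunction h(x) = AND of x_i over i in S (indices of included variables).

def conjunction(S, x):
    """Evaluate the monotone conjunction over variable set S on input x."""
    return all(x[i] == 1 for i in S)

def flips_to_change_output(S, x):
    """Minimum number of bit flips an adversary needs to change h(x).

    If h(x) = 1 (every variable in S is satisfied), flipping any single bit
    indexed by S suffices, so one flip is enough when S is non-empty.
    If h(x) = 0, the adversary must set every unsatisfied variable in S to 1.
    """
    if not S:
        return float("inf")  # the empty (constant-1) conjunction cannot be flipped
    if conjunction(S, x):
        return 1
    return sum(1 for i in S if x[i] == 0)

def is_robust_at(S, x, rho):
    """True if no adversary flipping at most rho bits can change h(x)."""
    return flips_to_change_output(S, x) > rho

if __name__ == "__main__":
    S = [0, 2, 5]                         # h(x) = x_0 AND x_2 AND x_5
    x = [1, 0, 1, 1, 0, 1, 0, 1]          # a point in {0,1}^8 with h(x) = 1
    print(flips_to_change_output(S, x))   # 1: flipping any one of x_0, x_2, x_5 suffices
    print(is_robust_at(S, x, rho=0))      # True: with no budget the prediction stands
    print(is_robust_at(S, x, rho=1))      # False: a single-bit adversary succeeds
```

Under this toy model, the perturbation budget rho plays the role of the $O(\log n)$ versus $\omega(\log n)$ thresholds discussed in the abstract; the sketch only evaluates robustness of a fixed hypothesis at a point and says nothing about how a robust learner would be constructed.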

Author Information

Pascale Gourdeau (University of Oxford)
Varun Kanade (University of Oxford)
Marta Kwiatkowska (University of Oxford)
James Worrell (University of Oxford)

Related Events (a corresponding poster, oral, or spotlight)

More from the Same Authors

  • 2021 : Research panel
    David Dunson · Marta Kwiatkowska · Steve MacEachern · Jeffrey Miller · Briana Joy Stephenson
  • 2020 Poster: Adaptive Reduced Rank Regression
    Qiong Wu · Felix MF Wong · Yanhua Li · Zhenming Liu · Varun Kanade
  • 2019 : Break / Poster Session 1
    Antonia Marcu · Yao-Yuan Yang · Pascale Gourdeau · Chen Zhu · Thodoris Lykouris · Jianfeng Chi · Mark Kozdoba · Arjun Nitin Bhagoji · Xiaoxia (Shirley) Wu · Jay Nandy · Michael T Smith · Bingyang Wen · Yuege (Gail) Xie · Konstantinos Pitas · Suprosanna Shit · Maksym Andriushchenko · Dingli Yu · Gaël Letarte · Misha Khodak · Hussein Mozannar · Chara Podimata · James Foulds · Yizhen Wang · Huishuai Zhang · Ondrej Kuzelka · Alexander Levine · Nan Lu · Zakaria Mhammedi · Paul Viallard · Diana Cai · Lovedeep Gondara · James Lucas · Yasaman Mahdaviyeh · Aristide Baratin · Rishi Bommasani · Alessandro Barp · Andrew Ilyas · Kaiwen Wu · Jens Behrmann · Omar Rivasplata · Amir Nazemi · Aditi Raghunathan · Will Stephenson · Sahil Singla · Akhil Gupta · YooJung Choi · Yannic Kilcher · Clare Lyle · Edoardo Manino · Andrew Bennett · Zhi Xu · Niladri Chatterji · Emre Barut · Flavien Prost · Rodrigo Toro Icarte · Arno Blaas · Chulhee Yun · Sahin Lale · YiDing Jiang · Tharun Medini · Ashkan Rezaei · Alexander Meinke · Stephen Mell · Gary Kazantsev · Shivam Garg · Anu Sinha · Vishnu Lokhande · Geovani Rizk · Han Zhao · Aditya Kumar Akash · Jikai Hou · Ali Ghodsi · Matthias Hein · Tyler Sypherd · Yichen Yang · Anastasia Pentina · Pierre Gillot · Antoine Ledent · Guy Gur-Ari · Noah MacAulay · Tianzong Zhang
  • 2019 Poster: Implicit Regularization for Optimal Sparse Recovery
    Tomas Vaskevicius · Varun Kanade · Patrick Rebeschini
  • 2019 Poster: Decentralized Cooperative Stochastic Bandits
    David Martínez-Rubio · Varun Kanade · Patrick Rebeschini
  • 2018 : Safety verification for neural networks with provable guarantees by Marta Kwiatkowska
    Marta Kwiatkowska