Metric learning is an important family of algorithms for classification and similarity search, but the robustness of learned metrics against small adversarial perturbations is less studied. In this paper, we show that existing metric learning algorithms, which focus on boosting clean accuracy, can produce metrics that are less robust than the Euclidean distance. To overcome this problem, we propose a novel metric learning algorithm that finds a Mahalanobis distance robust against adversarial perturbations, and the robustness of the resulting model is certifiable. Experimental results show that the proposed metric learning algorithm reduces both certified robust errors and empirical robust errors (errors under adversarial attacks). Furthermore, unlike neural network defenses, which usually face a trade-off between clean and robust errors, our method does not sacrifice clean accuracy compared with previous metric learning methods.
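To make the setting concrete, here is a minimal sketch (not the paper's certification algorithm) of the Mahalanobis distance d_M(x, y) = sqrt((x − y)ᵀ M (x − y)) that the proposed method learns; the matrix `M` below is a hypothetical example, and with M = I the metric reduces to the ordinary Euclidean distance:

```python
import numpy as np

def mahalanobis(x, y, M):
    """Mahalanobis distance between x and y under a PSD matrix M."""
    d = x - y
    return float(np.sqrt(d @ M @ d))

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

# M = I recovers the Euclidean distance: sqrt(2).
I = np.eye(2)
print(mahalanobis(x, y, I))

# A non-identity PSD matrix reweights directions, which is what metric
# learning exploits; here d^T M d = 2*1 + 0.5*1 = 2.5.
M = np.array([[2.0, 0.0],
              [0.0, 0.5]])
print(mahalanobis(x, y, M))
```

In the paper's setting, M is optimized so that nearest-neighbor predictions under d_M remain provably unchanged for all perturbations within a small norm ball around each input.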
Author Information
Lu Wang (Nanjing University & JD.com)
Xuanqing Liu (University of California, Los Angeles)
Jinfeng Yi (JD Research)
Yuan Jiang (National Key Lab for Novel Software Technology)
Cho-Jui Hsieh (UCLA)
More from the Same Authors
- 2020 Poster: Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond »
  Kaidi Xu · Zhouxing Shi · Huan Zhang · Yihan Wang · Kai-Wei Chang · Minlie Huang · Bhavya Kailkhura · Xue Lin · Cho-Jui Hsieh
- 2020 Poster: Elastic-InfoGAN: Unsupervised Disentangled Representation Learning in Class-Imbalanced Data »
  Utkarsh Ojha · Krishna Kumar Singh · Cho-Jui Hsieh · Yong Jae Lee
- 2020 Poster: Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations »
  Huan Zhang · Hongge Chen · Chaowei Xiao · Bo Li · Mingyan Liu · Duane Boning · Cho-Jui Hsieh
- 2020 Spotlight: Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations »
  Huan Zhang · Hongge Chen · Chaowei Xiao · Bo Li · Mingyan Liu · Duane Boning · Cho-Jui Hsieh
- 2020 Poster: An Efficient Adversarial Attack for Tree Ensembles »
  Chong Zhang · Huan Zhang · Cho-Jui Hsieh
- 2020 Poster: Multi-Stage Influence Function »
  Hongge Chen · Si Si · Yang Li · Ciprian Chelba · Sanjiv Kumar · Duane Boning · Cho-Jui Hsieh
- 2019 Poster: Stochastic Shared Embeddings: Data-driven Regularization of Embedding Layers »
  Liwei Wu · Shuqing Li · Cho-Jui Hsieh · James Sharpnack
- 2019 Poster: A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks »
  Hadi Salman · Greg Yang · Huan Zhang · Cho-Jui Hsieh · Pengchuan Zhang
- 2019 Poster: DTWNet: a Dynamic Time Warping Network »
  Xingyu Cai · Tingyang Xu · Jinfeng Yi · Junzhou Huang · Sanguthevar Rajasekaran
- 2019 Poster: Robustness Verification of Tree-based Models »
  Hongge Chen · Huan Zhang · Si Si · Yang Li · Duane Boning · Cho-Jui Hsieh
- 2019 Poster: Convergence of Adversarial Training in Overparametrized Neural Networks »
  Ruiqi Gao · Tianle Cai · Haochuan Li · Cho-Jui Hsieh · Liwei Wang · Jason Lee
- 2019 Spotlight: Convergence of Adversarial Training in Overparametrized Neural Networks »
  Ruiqi Gao · Tianle Cai · Haochuan Li · Cho-Jui Hsieh · Liwei Wang · Jason Lee
- 2019 Poster: A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning »
  Xuanqing Liu · Si Si · Jerry Zhu · Yang Li · Cho-Jui Hsieh