
Robust Learning against Relational Adversaries
Yizhen Wang · Mohannad Alhanahnah · Xiaozhu Meng · Ke Wang · Mihai Christodorescu · Somesh Jha

Wed Nov 30 02:00 PM -- 04:00 PM (PST) @ Hall J #332
Test-time adversarial attacks have posed serious challenges to the robustness of machine-learning models, and in many settings the adversarial perturbation need not be bounded by small $\ell_p$-norms. Motivated by attacks in program analysis and security tasks, we investigate $\textit{relational adversaries}$, a broad class of attackers who create adversarial examples in a reflexive-transitive closure of a logical relation. We analyze the conditions for robustness against relational adversaries and investigate different levels of robustness-accuracy trade-off due to various patterns in a relation. Inspired by these insights, we propose $\textit{normalize-and-predict}$, a learning framework that leverages input normalization to achieve provable robustness. The framework addresses the pain points of adversarial training against relational adversaries and can be combined with adversarial training for the benefits of both approaches. Guided by our theoretical findings, we apply our framework to source code authorship attribution and malware detection. Results on both tasks show that our learning framework significantly improves the robustness of models against relational adversaries. In the process, it outperforms adversarial training, the most noteworthy defense mechanism, by a wide margin.
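The core idea of normalize-and-predict can be illustrated with a minimal sketch: if every input reachable through the relation's reflexive-transitive closure is mapped to one canonical form before classification, the prediction is provably invariant under relational perturbations. The relation (dead-token insertion), the normalizer, and the toy classifier below are all illustrative stand-ins chosen for this sketch, not the paper's actual instantiation.

```python
# Hypothetical relation: the adversary may insert semantics-preserving
# "dead" tokens anywhere in a token sequence (a stand-in for code
# transformations used against malware detectors).
DEAD_TOKENS = {"nop", "skip"}

def normalize(tokens):
    """Map an input to a canonical representative of its equivalence
    class: here, simply drop any dead tokens the adversary may insert."""
    return tuple(t for t in tokens if t not in DEAD_TOKENS)

def classify(tokens):
    """Toy classifier operating on normalized inputs (stand-in for a
    learned model)."""
    return "malicious" if "exec" in tokens else "benign"

def predict(tokens):
    # Normalize first, then predict: any two inputs related by the
    # closure share a normal form, so they receive the same label.
    return classify(normalize(tokens))

clean = ["load", "exec", "store"]
perturbed = ["load", "nop", "exec", "skip", "store"]  # related to `clean`
assert predict(clean) == predict(perturbed) == "malicious"
```

The robustness guarantee follows directly from the normalizer: no retraining is needed against this relation, which is why the framework composes cleanly with adversarial training for perturbations the normalizer does not cover.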

Author Information

Yizhen Wang (VISA)
Mohannad Alhanahnah (University of Wisconsin-Madison)
Xiaozhu Meng (Rice University)
Ke Wang (Visa Research)
Mihai Christodorescu (Google)
Somesh Jha (University of Wisconsin-Madison)
