
Synergy-of-Experts: Collaborate to Improve Adversarial Robustness
Sen Cui · Jingfeng Zhang · Jian Liang · Bo Han · Masashi Sugiyama · Changshui Zhang

Tue Nov 29 02:00 PM -- 04:00 PM (PST) @ Hall J #206

Learning adversarially robust models requires predictions that are invariant within a small neighborhood of the natural inputs, which often runs into insufficient model capacity. Research has shown that learning multiple sub-models in an ensemble can mitigate this insufficiency, further improving generalization and robustness. However, the ensemble's voting-based strategy excludes the possibility that the true prediction remains with the minority. This paper therefore improves the ensemble through a collaboration scheme---Synergy-of-Experts (SoE). Compared with the voting-based strategy, SoE enables correct predictions even if only a single sub-model is correct. In SoE, every sub-model fits its own specific vulnerability area and leaves the remaining vulnerability areas to the other sub-models, which effectively optimizes the utilization of model capacity. Empirical experiments verify that SoE outperforms various ensemble methods against white-box and transfer-based adversarial attacks.
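The abstract's key contrast---majority voting fails when the correct sub-model is in the minority, while a selection-based collaboration can still recover the right answer---can be illustrated with a minimal sketch. The confidence-based selection rule below is only a simplified stand-in for SoE's expert-selection idea, not the paper's actual mechanism; all function names and the toy probabilities are illustrative assumptions.

```python
import numpy as np

def majority_vote(probs):
    """Voting-based ensemble: each sub-model casts one vote for its argmax class."""
    votes = np.argmax(probs, axis=1)  # per-model predicted class, shape (n_models,)
    return np.bincount(votes, minlength=probs.shape[1]).argmax()

def select_most_confident(probs):
    """Collaboration-style selection: defer entirely to the single most confident
    sub-model instead of aggregating votes. (Simplified stand-in for SoE's
    expert selection; the paper's actual rule may differ.)"""
    expert = np.argmax(probs.max(axis=1))  # index of the most confident sub-model
    return np.argmax(probs[expert])

# Toy example: three sub-models, two classes, true class is 1.
# Only the third sub-model is correct, but it is highly confident,
# while the two wrong sub-models are barely decided.
probs = np.array([[0.55, 0.45],
                  [0.60, 0.40],
                  [0.05, 0.95]])

majority_vote(probs)          # -> 0: the minority-correct sub-model is outvoted
select_most_confident(probs)  # -> 1: the confident correct expert prevails
```

The sketch makes the capacity argument concrete: under selection, each sub-model only needs to be reliable on its own region of inputs, whereas voting requires a majority of sub-models to be simultaneously correct on every input.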

Author Information

Sen Cui (Tsinghua University)
Jian Liang (Alibaba Group)

Jian Liang received his Ph.D. degree from Tsinghua University, Beijing, China, in 2018. From 2018 to 2020, he was a senior researcher in the Wireless Security Products Department of the Cloud and Smart Industries Group at Tencent, Beijing. In 2020, he joined the AI for International Department, New Retail Intelligence Engine, Alibaba Group, as a senior algorithm engineer. His paper received the Best Short Paper Award at the 2016 IEEE International Conference on Healthcare Informatics (ICHI).

Masashi Sugiyama (RIKEN / University of Tokyo)
Changshui Zhang (Tsinghua University)