

Workshop

Backdoors in Deep Learning: The Good, the Bad, and the Ugly

Khoa D Doan · Aniruddha Saha · Anh Tran · Yingjie Lao · Kok-Seng Wong · Ang Li · HARIPRIYA HARIKUMAR · Eugene Bagdasaryan · Micah Goldblum · Tom Goldstein

Room 203 - 205

Fri 15 Dec, 7 a.m. PST

Deep neural networks (DNNs) are revolutionizing almost all AI domains and have become the core of many modern AI systems. While outperforming classical methods, DNNs also face new security problems, such as adversarial and backdoor attacks, that are hard to discover and resolve due to the models' black-box nature. Backdoor attacks in particular are a relatively new threat, first identified in 2017, that has quickly gained attention in the research community. The number of backdoor-related papers grew from 21 to around 110 in just one year (2019-2020), and in 2022 alone more than 200 papers on backdoor learning were published, showing strong research interest in this domain.

Backdoor attacks are possible because of insecure model pretraining and outsourcing practices. Due to the complexity and tremendous cost of collecting data and training models, many individuals and companies simply adopt models or training data from third parties. Malicious third parties can implant backdoors in their models, or poison their released data, before delivering them to victims for illicit gain. This threat seriously undermines the safety and trustworthiness of AI development. Many recent studies on backdoor attacks and defenses aim to address this critical vulnerability.

While most works treat the backdoor as "evil", some studies exploit it for good purposes. A popular approach uses the backdoor as a watermark to detect illegal use of commercial data or models. A few works employ the backdoor as a trapdoor for adversarial defense. Studying the working mechanism of backdoors also deepens our understanding of how deep learning models work.

This workshop is designed to provide a comprehensive understanding of the current state of backdoor research. We also want to raise the AI community's awareness of this important security problem and motivate researchers to build safe and trustworthy AI systems.
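To make the data-poisoning threat described above concrete, here is a minimal sketch of a BadNets-style poisoning step: the attacker stamps a small trigger patch onto a fraction of the training images and relabels them with a chosen target class, so a model trained on the poisoned set behaves normally on clean inputs but predicts the target class whenever the trigger appears. This is an illustrative assumption-laden example, not material from the workshop; the function `poison_dataset` and all its parameters are hypothetical.

```python
# Hypothetical BadNets-style data-poisoning sketch (illustration only).
import numpy as np

def poison_dataset(images, labels, target_class=0, poison_rate=0.05,
                   patch_size=3, rng=None):
    """Return copies of (images, labels) with a white corner patch stamped
    on a random subset, and those samples relabeled as `target_class`.

    images: float array of shape (N, H, W, C) with values in [0, 1].
    labels: int array of shape (N,).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp a solid white patch in the bottom-right corner: the trigger.
    images[idx, -patch_size:, -patch_size:, :] = 1.0
    # Relabel the poisoned samples so training associates trigger -> target.
    labels[idx] = target_class
    return images, labels

# Example: poison 5% of a toy dataset of 32x32 RGB images.
X = np.random.default_rng(1).random((1000, 32, 32, 3))
y = np.random.default_rng(2).integers(0, 10, size=1000)
X_poisoned, y_poisoned = poison_dataset(X, y, target_class=7)
```

The same mechanism underlies the "good" uses mentioned above: a model owner can deliberately embed such a trigger-response pair as a watermark and later query a suspect model with triggered inputs to check for unauthorized use.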


Schedule