

Poster

Injecting Undetectable Backdoors in Deep Learning and Language Models

Alkis Kalavasis · Amin Karbasi · Argyris Oikonomou · Katerina Sotiraki · Grigoris Velegkas · Manolis Zampetakis

West Ballroom A-D #6704
Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

As ML models become increasingly complex and integral to high-stakes domains such as finance and healthcare, they also become more susceptible to sophisticated adversarial attacks. We investigate the threat posed by undetectable backdoors in models developed by insidious external expert firms. When such backdoors exist, they allow the designer of the model to sell users information on how to carefully perturb the least significant bits of their input to change the classification outcome to a favorable one. We develop a general strategy for planting a backdoor in neural networks while ensuring that, even if the model's weights and architecture are accessible, the existence of the backdoor remains undetectable. To achieve this, we utilize techniques from cryptography such as cryptographic signatures and indistinguishability obfuscation. We further introduce the notion of backdoors for language models and extend our neural network backdoor attacks to generative models based on the existence of steganographic functions.
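To make the signature-based idea concrete, below is a minimal illustrative sketch (not the authors' construction): a wrapper around an arbitrary base classifier that flips its prediction only when the least significant bits of the input carry a valid digital signature on the target label, so only the holder of the signing key can craft activating perturbations. All names (`BackdooredClassifier`, `embed_in_lsb`, etc.) are hypothetical; crucially, this naive wrapper is plainly visible to anyone inspecting the code, whereas the paper's contribution is making such a backdoor undetectable even under white-box access, e.g., via indistinguishability obfuscation.

```python
# Illustrative sketch of a signature-triggered backdoor hidden in input LSBs.
# Assumes the `cryptography` package; helper names are hypothetical.
import numpy as np
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

SIG_BYTES = 64  # Ed25519 signature length


def embed_in_lsb(x_uint8: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide `payload` in the least significant bits of a uint8 input."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    x = x_uint8.flatten().copy()
    assert bits.size <= x.size, "input too small to carry the payload"
    x[: bits.size] = (x[: bits.size] & 0xFE) | bits  # overwrite LSBs only
    return x.reshape(x_uint8.shape)


def extract_from_lsb(x_uint8: np.ndarray, n_bytes: int) -> bytes:
    """Read back `n_bytes` of payload from the input's least significant bits."""
    bits = x_uint8.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()


class BackdooredClassifier:
    """Wraps `base_model` (any callable returning a label); outputs `target`
    only when the input carries a valid signature on that target label."""

    def __init__(self, base_model, public_key: Ed25519PublicKey, target: int):
        self.base_model = base_model
        self.public_key = public_key
        self.target = target

    def predict(self, x_uint8: np.ndarray) -> int:
        sig = extract_from_lsb(x_uint8, SIG_BYTES)
        try:
            self.public_key.verify(sig, bytes([self.target]))
            return self.target  # backdoor fires only on a valid signature
        except InvalidSignature:
            return self.base_model(x_uint8)  # behave honestly otherwise


# Usage: only the holder of the private signing key can activate the backdoor.
sk = Ed25519PrivateKey.generate()
model = BackdooredClassifier(lambda x: 0, sk.public_key(), target=7)

x = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)
print(model.predict(x))                      # honest prediction: 0
x_bd = embed_in_lsb(x, sk.sign(bytes([7])))  # perturb only the LSBs
print(model.predict(x_bd))                   # backdoored prediction: 7
```

Because verification uses only the public key, an inspector who recovers the key from the model still cannot forge activating inputs; hiding the fact that any such check exists at all is what the cryptographic obfuscation machinery in the paper is for.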
