
Physically-Constrained Adversarial Attacks on Brain-Machine Interfaces
Xiaying Wang · Rodolfo Octavio Siller Quintanilla · Michael Hersche · Luca Benini · Gagandeep Singh
Event URL: https://openreview.net/forum?id=oogi4S33q8

Deep learning (DL) has been widely employed in brain-machine interfaces (BMIs) to decode subjects' intentions from recorded brain activity, enabling direct interaction with machines. BMI systems play a crucial role in medical applications and have recently gained increasing interest as consumer-grade products. Failures in such systems can cause medical misdiagnoses, physical harm, and financial loss. Especially given the current market boom of such devices, it is of utmost importance to analyze and understand potential malicious attacks in depth, in order to develop countermeasures and avoid future damage. This work presents the first study that analyzes and models adversarial attacks based on physical-domain constraints in DL-based BMIs. Specifically, we assess the robustness of EEGNet, the current state-of-the-art network embedded in a real-world, wearable BMI. We propose new methods that incorporate domain-specific insights and constraints to design natural and imperceptible attacks and to realistically model signal propagation over the human scalp. Our results show that EEGNet is significantly vulnerable to adversarial attacks, with an attack success rate of more than 50%.
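For readers unfamiliar with the general technique, a constrained adversarial attack perturbs the input within a small, bounded budget so the change stays imperceptible while flipping the classifier's decision. The sketch below is a generic projected-gradient-descent (PGD) attack with an L-infinity amplitude bound, standing in for the physical-amplitude constraints on EEG signals; it is an illustrative minimal example, not the authors' method, and `grad_fn`, `eps`, `alpha`, and the toy linear loss are all hypothetical stand-ins.

```python
import numpy as np

def pgd_attack(grad_fn, x, eps, alpha, steps):
    """Maximize the classifier loss by iterated signed-gradient steps,
    projecting back into the amplitude budget |x_adv - x| <= eps
    after every step (a stand-in for a physical-domain constraint)."""
    x_adv = x.copy()
    for _ in range(steps):
        # ascend the loss surface in the sign of the input gradient
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        # project onto the L-inf ball around the clean signal
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# Toy demo: a linear "loss" w . x, whose input gradient is simply w.
w = np.array([1.0, -2.0, 0.5])      # hypothetical loss weights
x = np.zeros(3)                     # hypothetical clean "signal"
x_adv = pgd_attack(lambda z: w, x, eps=0.1, alpha=0.05, steps=5)
```

In a real BMI setting, `x` would be a multichannel EEG window and `grad_fn` the gradient of the decoder's loss with respect to that window; the projection step is where domain-specific physical constraints would replace the simple clip.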

Author Information

Xiaying Wang (ETH Zurich)
Rodolfo Octavio Siller Quintanilla (University of Zurich and ETHZ)
Michael Hersche (Swiss Federal Institute of Technology)
Luca Benini (ETH Zurich)
Gagandeep Singh (University of Illinois, Urbana Champaign)