Sliced-Wasserstein Importance Weighting for Robust Brain–Computer Interface Speech Decoding
Abstract
Brain–computer interfaces (BCIs) hold transformative potential, but their performance often degrades across sessions due to signal drift and calibration challenges. In this paper, we propose a method to improve cross-session robustness by reweighting training data according to their similarity to the target session, as measured with the Sliced-Wasserstein distance. We provide theoretical justification for this approach in a simplified statistical model, and we evaluate it on real BCI data. Sliced-Wasserstein weighting reduces the phoneme error rate from 0.296 to 0.169 (a 42.9\% relative reduction) on the first post-training session and maintains nearly the same performance over the three subsequent sessions. These results suggest that distributionally informed reweighting offers a principled and fully unsupervised way to mitigate session-to-session variability in BCIs, paving the way toward more reliable long-term neural decoding without the need for costly recalibration.
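To make the core idea concrete, the following is a minimal sketch (not the paper's implementation) of a Monte-Carlo Sliced-Wasserstein-2 estimate between two sets of neural feature vectors, together with a hypothetical exponential weighting rule `exp(-d / tau)`; the function names, the temperature `tau`, and the equal-sample-size assumption are illustrative choices, not details taken from the paper.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=100, seed=0):
    """Monte-Carlo estimate of the Sliced-Wasserstein-2 distance between
    two point clouds X (n, d) and Y (n, d) with equal sample counts.

    Each random unit direction reduces the problem to 1-D, where the
    Wasserstein-2 distance is the L2 distance between sorted projections.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Random directions drawn from a Gaussian, normalized onto the unit sphere.
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    total = 0.0
    for t in theta:
        px = np.sort(X @ t)  # 1-D projection of the source samples
        py = np.sort(Y @ t)  # 1-D projection of the target samples
        total += np.mean((px - py) ** 2)
    return np.sqrt(total / n_projections)

def session_weights(train_sessions, target, tau=1.0):
    """Hypothetical reweighting rule: sessions whose feature distribution is
    closer (in Sliced-Wasserstein distance) to the target session receive
    larger importance weights; weights are normalized to sum to 1."""
    dists = np.array([sliced_wasserstein(S, target) for S in train_sessions])
    w = np.exp(-dists / tau)
    return w / w.sum()
```

Because the distance is computed on the feature distributions alone, no labels from the target session are needed, which is what makes the reweighting fully unsupervised.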