Prototyping Co-Control Brain–Computer Interfaces Through Brain-to-Text
Abstract
Co-control between brain–computer interface (BCI) users and intelligent systems requires effective fusion across specialized modules. In Brain-to-Text BCIs, neural decoders (NDs) map neural activity to sequences of text tokens, while language models (LMs) provide compensatory linguistic constraints when ND predictions are uncertain. Integration is typically achieved through probabilistic fusion, yet current systems are poorly calibrated: they encode some notion of confidence in the output distribution but fail to discriminate reliably between correct and incorrect predictions. Through oracle manipulations of the predicted probability distribution that preserve the MLE solution while spanning over-confident, uncertainty-aware, and alternative-rich regimes, we demonstrate that a better-calibrated system can substantially improve performance. These results highlight the need for neural decoders to communicate both uncertainty and informative alternatives in order to enable robust multi-module co-control.
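As a minimal illustrative sketch (not the paper's actual oracle procedure), the snippet below shows one way a decoder's predicted distribution can be reshaped without changing its MLE (argmax) token: temperature scaling in log space sharpens the distribution toward an over-confident regime or flattens it toward an alternative-rich one. The function name `reshape_confidence` and the temperature values are illustrative assumptions, assuming the ND emits a softmax distribution over candidate tokens.

```python
import numpy as np

def reshape_confidence(probs, temperature):
    """Rescale a predicted token distribution in log space.

    temperature < 1 sharpens the distribution (over-confident regime);
    temperature > 1 flattens it (uncertainty-aware / alternative-rich
    regimes). Because the transform is monotonic, the argmax token --
    the MLE solution -- is unchanged for any positive temperature.
    """
    probs = np.asarray(probs, dtype=float)
    logits = np.log(probs + 1e-12) / temperature
    rescaled = np.exp(logits - logits.max())   # subtract max for numerical stability
    return rescaled / rescaled.sum()

# Hypothetical ND distribution over three candidate tokens.
nd_probs = np.array([0.6, 0.3, 0.1])
print(reshape_confidence(nd_probs, temperature=0.5))  # sharper: over-confident
print(reshape_confidence(nd_probs, temperature=2.0))  # flatter: alternative-rich
```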