EEG Foundation Models: A Critical Review of Current Progress and Future Directions
Abstract
Electroencephalography (EEG) signals offer immense value for scientific and clinical investigations. In recent years, self-supervised EEG foundation models (EEG-FMs) have presented a viable path towards robust and scalable extraction of EEG features. However, the real-world readiness of these early EEG-FMs, and the rubrics for judging long-term research progress, remain unclear. This study presents a critical review of ten early first-generation EEG-FMs, examining (a) the representation of raw input data, (b) self-supervised representation learning, and (c) evaluation strategy. We synthesize key EEG-FM methodological trends, empirical findings, and remaining gaps. We find that EEG-FMs borrow heavily from the language and vision domains for their model architectures and self-supervision schemes. However, EEG-FM evaluations remain heterogeneous and largely limited, making it challenging to assess their practical, off-the-shelf utility. Beyond adopting standardized and realistic evaluations, future efforts should demonstrate substantial scaling effects and make principled, trustworthy choices throughout the EEG-FM pipeline. We believe that developing benchmarks, software tools, technical methodologies, and clinical/scientific applications in collaboration with domain experts may advance the real-world adoption of EEG-FMs.