Demo: ezmsg and LSL: Prototyping an iBCI from modular components
Abstract
We will provide an interactive demonstration of a modern approach to rapid prototyping of implantable brain-computer interfaces (iBCIs) that bridges practical implantable neural interface development and foundation model research. Attendees will experience a complete BCI development workflow using ezmsg and Lab Streaming Layer (LSL), tools designed to be approachable for neuroengineers while maintaining compatibility with familiar Python libraries and frameworks.
Data Sources: Attendees will have the option to launch and interact with multiple data sources. The LSL ecosystem provides access to more than 100 devices; we will use HID input, a microphone, and a simple motion-capture device. We also support playback of previously recorded data using either LSL applications or dedicated ezmsg source modules. Finally, we will have simulated and real Blackrock Neurotech data sources with hundreds of channels sampled at 30 kHz.
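To make the playback idea concrete, here is a minimal sketch of a chunked playback source approximating a Blackrock-style stream (hundreds of channels at 30 kHz), delivered in fixed-size chunks the way a streaming source would publish them. All names here are illustrative and are not part of the ezmsg or LSL APIs.

```python
import numpy as np

def playback_source(recording: np.ndarray, chunk_size: int = 300):
    """Yield (chunk, t0) pairs from a (samples, channels) recording."""
    fs = 30_000.0  # assumed sample rate in Hz
    for start in range(0, recording.shape[0], chunk_size):
        chunk = recording[start:start + chunk_size]
        t0 = start / fs  # timestamp of the first sample in the chunk
        yield chunk, t0

# Example: 1 second of simulated 96-channel data in 10 ms chunks.
rec = np.random.randn(30_000, 96)
chunks = list(playback_source(rec, chunk_size=300))
```

A real source module would publish each chunk to downstream subscribers instead of collecting them in a list, but the chunking and timestamping logic is the same.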
Pipeline Construction: Using ezmsg's node-based architecture, participants will build a directed acyclic graph (DAG) to perform signal processing and inference. The signal processing pipeline will be built from pre-existing modules in the ezmsg-sigproc package. Participants with Python experience will have the opportunity to write custom modules. We will describe AxisArray, the common message structure shared among ezmsg-sigproc nodes. The signal processing pipeline should transform raw data into useful features with sub-10 ms processing latency.
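The following is a simplified stand-in for the AxisArray idea: an ndarray plus named dimensions, so each node in the DAG knows how to interpret the data it receives. This toy class and the two example nodes are illustrative sketches, not the actual ezmsg-sigproc implementations.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ToyAxisArray:
    data: np.ndarray  # e.g. shape (time, channels)
    dims: tuple       # axis names, e.g. ("time", "ch")

def rectify(msg: ToyAxisArray) -> ToyAxisArray:
    """Full-wave rectification node."""
    return ToyAxisArray(np.abs(msg.data), msg.dims)

def mean_over_time(msg: ToyAxisArray) -> ToyAxisArray:
    """Collapse the time axis into a single feature per channel."""
    t = msg.dims.index("time")
    return ToyAxisArray(msg.data.mean(axis=t, keepdims=True), msg.dims)

# Chaining the nodes mimics wiring a linear segment of the DAG.
msg = ToyAxisArray(np.array([[1.0, -2.0], [-3.0, 4.0]]), ("time", "ch"))
features = mean_over_time(rectify(msg))
```

Because every node consumes and produces the same message structure, nodes compose freely and the graph can be rearranged without rewriting the processing logic.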
PyTorch Model Integration: The demonstration features decoders from the ezmsg-learn Python package, which leverages the river, scikit-learn, and PyTorch packages and provides the ability to specify arbitrary models and checkpoints. We will provide pre-trained models for speech decoding and cursor control. For decoders that support partial updates, we will demonstrate how to segment and label data while accounting for clock discrepancies, and how to use these segments to update model parameters while the system is running. Decoder output is published to consumers via LSL.
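The partial-update pattern can be sketched as follows: a minimal linear decoder with a partial_fit-style method, updated from labeled samples while the system runs. This toy class only mimics the river/scikit-learn incremental-learning interface; it is not the ezmsg-learn implementation.

```python
import numpy as np

class ToyOnlineDecoder:
    def __init__(self, n_features: int, lr: float = 0.1):
        self.w = np.zeros(n_features)
        self.lr = lr

    def predict(self, x: np.ndarray) -> float:
        return float(x @ self.w)

    def partial_fit(self, x: np.ndarray, y: float) -> None:
        """One SGD step on a single labeled sample."""
        err = self.predict(x) - y
        self.w -= self.lr * err * x

# Update the decoder from labeled segments without restarting it.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])  # unknown mapping to recover
dec = ToyOnlineDecoder(n_features=3)
for _ in range(500):
    x = rng.standard_normal(3)
    dec.partial_fit(x, float(x @ true_w))
```

In the live system, the labeled (x, y) pairs would come from the segmented, clock-aligned data described above; here they are synthesized so the update loop is self-contained.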
Real-time Visualization and Monitoring: The system provides live visualizations of graph structure, data stream contents, and performance metrics.
Reusable Module Design: A key design principle of the ezmsg ecosystem is that signal processing and machine learning modules are designed for reuse across both online and offline contexts. This architectural decision ensures that identical algorithms are used during offline analysis, online prototyping, and eventual clinical validation, eliminating discrepancies that often plague traditional BCI development workflows and supporting rigorous verification and validation during clinical translation. We will show how to reuse ezmsg modules in a Jupyter notebook or in tests.
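The reuse principle can be illustrated with a stateless processing step applied chunk by chunk online and to the full recording offline, with both paths producing identical results. The function here is a stand-in for an ezmsg module; all names are illustrative.

```python
import numpy as np

def process(block: np.ndarray) -> np.ndarray:
    """A stateless processing step shared by online and offline paths."""
    return np.abs(block)

recording = np.sin(np.linspace(0, 10, 3000)).reshape(-1, 1)

# Offline: one call over the whole recording, e.g. in a Jupyter notebook.
offline = process(recording)

# Online: the same function applied to streamed chunks.
online = np.concatenate([process(chunk) for chunk in np.split(recording, 10)])
```

Because the algorithm lives in one shared module, a unit test asserting that the online and offline outputs match is enough to guard against the drift between analysis code and deployed code that the section above describes. Stateful modules (e.g. IIR filters) require carrying filter state across chunks, which is exactly what the ezmsg module wrappers manage.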
Practical Learning Outcomes: Workshop attendees will gain hands-on experience with tools that enable immediate BCI prototyping using familiar Python libraries, understanding of real-time processing constraints in implantable systems, and insight into how foundation models can address generalization challenges in neural interface development. Participants will leave with practical knowledge of building modular, testable BCI systems that maintain research flexibility while supporting the reproducibility and validation requirements of clinical translation.