Demo: Agentic AI Makes Real-Time Neural Data Conversational for Epilepsy Patients
Abstract
Research Overview
We have developed and deployed an interactive, agentic AI system in the Penn Medicine epilepsy monitoring unit (EMU) that creates a high-resolution digital phenotype by fusing continuous biosignal data with a patient's subjective experience. The core innovation is a two-way dialogue, facilitated by a large language model (LLM), which enables real-time interaction between a patient and their own physiological data.
In our implementation, continuous electroencephalography (EEG) data is acquired from patients in the EMU. This high-bandwidth data is streamed in real time to a secure, HIPAA-compliant cloud infrastructure. Within a scalable cloud-computing environment, automated analysis pipelines continuously process the incoming data to extract a suite of clinically relevant biomarkers, including seizure and interictal spike detections, sleep architecture, and brain synchrony measures. The neurophysiological data infrastructure is synchronously coupled with a patient-facing web application that provides a conversational chat interface. The system is bidirectional: it can 1) initiate messages to a patient about their symptoms or well-being when the analysis engine detects a significant physiological event (e.g., a possible seizure), and 2) answer a patient's natural-language questions about their health status (e.g., "Did I have any seizures in my sleep last night?") by querying and synthesizing their data through a ReAct agent framework.
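To make the agent loop concrete, below is a minimal, self-contained sketch of a ReAct-style question-answering cycle over biomarker tools. The tool names (count_seizures, sleep_summary), their signatures, the timestamps, and the canned LLM responses are illustrative assumptions, not the deployed system's API; a real deployment would replace llm() with a model call and the tools with queries against the biomarker store.

```python
from typing import Callable

# Hypothetical tool registry; names and signatures are illustrative,
# not the deployed system's actual API.
def count_seizures(start: str, end: str) -> str:
    """Query the (stubbed) biomarker store for seizure detections."""
    return "0 seizure detections between {} and {}".format(start, end)

def sleep_summary(night: str) -> str:
    """Return a (stubbed) sleep-architecture summary for one night."""
    return f"Night of {night}: 7.2 h total sleep, 18% REM, 21% deep."

TOOLS: dict[str, Callable[..., str]] = {
    "count_seizures": count_seizures,
    "sleep_summary": sleep_summary,
}

def llm(prompt: str) -> str:
    """Stand-in for the LLM call; a real system would hit a model API."""
    # Canned ReAct-style steps for the demo question.
    if "Observation" not in prompt:
        return ('Thought: I should check overnight seizure detections.\n'
                'Action: count_seizures["2025-01-14 22:00", "2025-01-15 07:00"]')
    return "Final Answer: No seizures were detected in your sleep last night."

def react_agent(question: str, max_steps: int = 4) -> str:
    """Minimal ReAct loop: alternate Thought/Action with tool Observations."""
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        step = llm(transcript)
        transcript += "\n" + step
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        # Parse the Action line, e.g. Action: count_seizures["a", "b"]
        action = step.split("Action:", 1)[1].strip()
        name, _, arg_str = action.partition("[")
        args = [a.strip().strip('"') for a in arg_str.rstrip("]").split(",")]
        transcript += "\nObservation: " + TOOLS[name](*args)
    return "I could not answer within the step limit."

print(react_agent("Did I have any seizures in my sleep last night?"))
```

Running the script walks the example question through one Thought/Action/Observation cycle and returns an answer grounded in the tool's output rather than the model's free text.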
We have deployed the system in the EMU with an initial patient cohort (n=7) undergoing long-term clinical monitoring. We have achieved end-to-end system latencies under 3 minutes, verified real-time data throughput for multiple patients simultaneously, streamed and processed 25+ days of continuous EEG data, and recorded 400+ patient-LLM messages. While the system has been demonstrated in epilepsy, the framework is generalizable by design and can be adapted to other chronic conditions where physiology and behavior are intertwined.
Demo Experience
For our interactive demo, participants will assume the role of patients and interact directly with our live system. Each attendee will receive a temporary account to experience how foundation models make complex neurophysiological data accessible and actionable. They will be placed in a simulated patient session with both a pre-loaded continuous EEG dataset and a real-time data stream, as though it were being actively recorded. Attendees will be able to test all of the system's core capabilities, including submitting natural-language queries about their data, conversing with the specialized LLM, and responding to physiologically triggered events. They will observe how the agentic system processes their request, executes analyses through the tool-use architecture, and synthesizes responses grounded in actual physiological data. We will also provide a behind-the-scenes look at the technical architecture: how data streams are processed in real time by our cloud infrastructure, how patient annotations are used to update our predictive models over time, and how the agent's reasoning chain (its planning, tool calls, and final response) unfolds.
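As a taste of the event-triggered interactions attendees will respond to, here is a minimal sketch of how a detector alert might become an outbound check-in message. The event schema, field names, and confidence threshold are illustrative assumptions rather than the production message format.

```python
import json
from queue import Queue

# Outbound message queue; a real deployment would push to the chat backend.
events: Queue = Queue()

def on_detection(event: dict) -> None:
    """Turn a (hypothetical) detector alert into a patient check-in message."""
    if event["type"] == "possible_seizure" and event["confidence"] >= 0.8:
        prompt = (f"At {event['time']}, our analysis flagged a possible "
                  "seizure. Are you feeling okay? Did you notice anything?")
        events.put({"channel": "chat", "patient_id": event["patient_id"],
                    "message": prompt})

# Example: a detection arriving from the (stubbed) analysis pipeline.
on_detection({"type": "possible_seizure", "confidence": 0.92,
              "time": "03:41", "patient_id": "demo-001"})
print(json.dumps(events.get(), indent=2))
```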