Decoding Silence through Neural Estimation in Stochastic Dynamical Systems
Abstract
Accurate remote state estimation is a fundamental component of many autonomous and networked dynamical systems, where multiple decision-making agents (for instance, a scheduler and an estimator) interact and communicate over shared, bandwidth-constrained channels. These communication constraints introduce an additional layer of complexity, namely, the decision of when to communicate, and this creates a fundamental trade-off between estimation accuracy and communication resource usage. Traditional extensions of classical estimation algorithms (e.g., the Kalman filter) treat the absence of communication as ‘missing’ information. However, silence itself can carry implicit information about the system’s state, which, if properly interpreted, can improve estimation quality even when no message is received. Leveraging this implicit structure, though, poses significant analytical challenges, even in relatively simple systems. In this paper, we propose CALM (Communication-Aware Learning and Monitoring), a novel learning-based framework that jointly addresses the dual challenges of communication scheduling and estimator design. Our approach entails learning not only when to communicate but also how to infer useful information from periods of communication silence. Our case study on a control system benchmark demonstrates that CALM decodes the implicit coordination between the estimator and the scheduler to extract information from instances of ‘silence’ and improve estimation accuracy.
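To make the idea concrete, the sketch below is ours, not the paper’s CALM implementation: it replaces the learned scheduler and learned decoder with classical hand-designed stand-ins, namely a hypothetical send-on-delta scheduler (the sensor transmits the state only when it drifts more than a threshold `delta` from the last transmitted value) and a truncated-Gaussian moment-matched update on silence. Under these assumptions, ‘no message’ certifies that the state still lies within `delta` of the last transmitted value, information a naive open-loop predictor discards.

```python
# A minimal sketch (not the paper's CALM method) of why silence is informative.
# Assumptions: scalar linear-Gaussian dynamics, a hypothetical send-on-delta
# scheduler, and an idealized noiseless state transmission on each send.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
a, q, delta, T = 0.95, 0.1, 1.0, 10_000  # dynamics, noise var, threshold, steps

def truncated_moments(mu, var, lo, hi):
    """Mean and variance of N(mu, var) conditioned on lo <= x <= hi."""
    s = np.sqrt(var)
    al, be = (lo - mu) / s, (hi - mu) / s
    Z = max(norm.cdf(be) - norm.cdf(al), 1e-12)   # probability of silence
    d = (norm.pdf(al) - norm.pdf(be)) / Z
    mean = mu + s * d
    variance = var * (1.0 + (al * norm.pdf(al) - be * norm.pdf(be)) / Z - d**2)
    return mean, variance

x = x_sent = mu_naive = mu = var = 0.0
se_naive = se_aware = 0.0
for _ in range(T):
    x = a * x + rng.normal(scale=np.sqrt(q))       # true state evolves
    mu_naive *= a                                  # naive: open-loop prediction
    mu, var = a * mu, a * a * var + q              # aware: predict step
    if abs(x - x_sent) > delta:                    # scheduler decides to send
        x_sent = x
        mu_naive = mu = x                          # both receive the state
        var = 0.0
    else:                                          # silence: decode the event
        mu, var = truncated_moments(mu, var, x_sent - delta, x_sent + delta)
    se_naive += (x - mu_naive) ** 2
    se_aware += (x - mu) ** 2

print(f"naive MSE (silence = missing data): {se_naive / T:.4f}")
print(f"silence-aware MSE:                  {se_aware / T:.4f}")
```

Because the silence-aware estimator conditions on the exact event that the scheduler’s silence certifies, its mean is pulled back toward the feasible interval while the naive prediction decays open-loop, so its squared error is typically lower at the same communication cost. CALM generalizes this logic by learning both the scheduling rule and the silence-decoding step rather than fixing them by hand.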