

Memory-Efficient Learning of Stable Linear Dynamical Systems for Prediction and Control

Giorgos ('Yorgos') Mamakoukas · Orest Xherija · Todd Murphey

Poster Session 5 #1630

Keywords: [ Non-Convex Optimization ] [ Optimization ] [ Stochastic Optimization ]

Abstract: Learning a stable Linear Dynamical System (LDS) from data involves creating models that both minimize reconstruction error and enforce stability of the learned representation. We propose a novel algorithm for learning stable LDSs. Using a recent characterization of stable matrices, we present an optimization method that ensures stability at every step and iteratively improves the reconstruction error using gradient directions derived in this paper. When applied to LDSs with inputs, our approach---in contrast to current methods for learning stable LDSs---updates both the state and control matrices, expanding the solution space and allowing for models with lower reconstruction error. We apply our algorithm in simulations and experiments to a variety of problems, including learning dynamic textures from image sequences and controlling a robotic manipulator. Compared to existing approaches, our proposed method achieves an \textit{orders-of-magnitude} improvement in reconstruction error and superior results in terms of control performance. In addition, it is \textit{provably} more memory efficient, with an $\mathcal{O}(n^2)$ space complexity compared to $\mathcal{O}(n^4)$ of competing alternatives, thus scaling to higher-dimensional systems when the other methods fail. The code of the proposed algorithm and animations of the results can be found at
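For context, here is a minimal Python sketch of the underlying problem setup, assuming snapshot data X and inputs U: it fits the state and control matrices (A, B) by least squares and then crudely rescales A into the stable region. This is a naive baseline for illustration only, not the paper's algorithm, which keeps every iterate stable by construction through a characterization of stable matrices and updates both A and B along gradient directions derived in the paper.

```python
# Minimal sketch (not the paper's method): fit an LDS x_{t+1} ~ A x_t + B u_t
# by least squares, then crudely enforce Schur stability by shrinking A whenever
# its spectral radius reaches 1.
import numpy as np

def fit_lds_least_squares(X, U):
    """X: (n, T) state snapshots, U: (m, T-1) inputs. Returns A (n, n), B (n, m)."""
    X0, X1 = X[:, :-1], X[:, 1:]
    Z = np.vstack([X0, U])                # stacked regressors [x_t; u_t]
    AB = X1 @ np.linalg.pinv(Z)           # least-squares solution for [A B]
    n = X.shape[0]
    return AB[:, :n], AB[:, n:]

def project_to_stable(A, margin=1e-3):
    """Naive stability fix: rescale A so its spectral radius is below 1."""
    rho = max(abs(np.linalg.eigvals(A)))
    return A if rho < 1.0 else A * (1.0 - margin) / rho

# Toy usage on random data (hypothetical dimensions).
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 100))         # n = 4 states, T = 100 snapshots
U = rng.standard_normal((2, 99))          # m = 2 inputs
A, B = fit_lds_least_squares(X, U)
A_stable = project_to_stable(A)
print("spectral radius:", max(abs(np.linalg.eigvals(A_stable))))
```

Unlike this rescaling heuristic, the proposed approach never leaves the set of stable matrices during optimization and, because it also adapts B, searches a larger solution space for models with lower reconstruction error.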
