State-space models with layer-wise nonlinearity are universal approximators with exponential decaying memory

Shida Wang · Beichen Xue

Great Hall & Hall B1+B2 (level 1) #825
Thu 14 Dec 8:45 a.m. PST — 10:45 a.m. PST


State-space models have gained popularity in sequence modelling due to their simple and efficient network structures. However, the absence of nonlinear activation along the temporal direction limits the model's capacity. In this paper, we prove that stacking state-space models with layer-wise nonlinear activation is sufficient to approximate any continuous sequence-to-sequence relationship. Our findings demonstrate that the addition of layer-wise nonlinear activation enhances the model's capacity to learn complex sequence patterns. At the same time, we show both theoretically and empirically that state-space models do not fundamentally resolve the issue of exponentially decaying memory. The theoretical results are supported by numerical verification.
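To make the architectural claim concrete, below is a minimal sketch (not the authors' exact parameterization) of the structure the abstract describes: each layer runs a purely linear recurrence along time, a pointwise nonlinearity is applied only layer-wise to the readout, and several such layers are stacked. The function names and the simple dense matrices are illustrative assumptions.

```python
import numpy as np

def ssm_layer(u, A, B, C, activation=np.tanh):
    """One state-space layer: a linear recurrence over time, followed by a
    layer-wise (pointwise) nonlinearity on the readout.

    u : (T, d_in) input sequence
    A : (d_h, d_h) state transition matrix
    B : (d_h, d_in) input matrix
    C : (d_out, d_h) readout matrix
    """
    T = u.shape[0]
    h = np.zeros(A.shape[0])
    y = np.empty((T, C.shape[0]))
    for t in range(T):
        # The recurrence itself is linear in h; no nonlinearity along time.
        h = A @ h + B @ u[t]
        # The nonlinearity acts only layer-wise, on the layer's output.
        y[t] = activation(C @ h)
    return y

def stacked_ssm(u, layers):
    """Stack several SSM layers; each layer's output feeds the next."""
    x = u
    for (A, B, C) in layers:
        x = ssm_layer(x, A, B, C)
    return x

# Toy usage: two stacked layers on a random sequence.
rng = np.random.default_rng(0)
d = 4
layers = [(0.9 * np.eye(d),
           rng.normal(size=(d, d)) / d,
           rng.normal(size=(d, d)) / d) for _ in range(2)]
y = stacked_ssm(rng.normal(size=(16, d)), layers)
print(y.shape)  # (16, 4)
```

With a stable transition matrix (spectral radius below one, as with the 0.9 * I above), the hidden state's dependence on past inputs shrinks geometrically with the time lag, which is the exponentially decaying memory the paper argues is not resolved by stacking.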
