Recurrent neural networks (RNNs) with continuous-time hidden states are a natural fit for modeling irregularly-sampled time series. These models, however, face difficulties when the input data possess long-term dependencies. We show that, as in standard RNNs, the underlying cause of this issue is the vanishing or exploding of gradients during training. This phenomenon arises from the ordinary differential equation (ODE) representation of the hidden state itself, regardless of the choice of ODE solver. We provide a solution by equipping arbitrary continuous-time networks with a memory compartment that is separate from their time-continuous state. This way, we encode a continuous-time dynamical flow within the RNN, allowing it to respond to inputs arriving at arbitrary time lags while ensuring constant error propagation through the memory path. We call these models Mixed-Memory RNNs (mmRNNs). We experimentally show that Mixed-Memory RNNs outperform recently proposed RNN-based counterparts on non-uniformly sampled data with long-term dependencies.
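The idea in the abstract can be illustrated with a minimal sketch: a gated memory compartment `c` updated additively (so errors propagate through it with roughly constant magnitude), paired with a continuous-time state `h` evolved by an ODE solver over the elapsed time between observations. This is a hedged illustration, not the authors' exact model; the specific ODE `dh/dt = tanh(W x + U h) - h`, the fixed-step Euler solver, and the LSTM-style gates are all assumptions made for the sake of a runnable example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MixedMemoryCell:
    """Illustrative mixed-memory cell (assumed structure, not the paper's
    exact architecture): a gated memory compartment `c` with an additive
    update path, plus a continuous-time state `h` evolved by an ODE solver
    over the elapsed time between observations."""

    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        k = input_size + hidden_size
        # LSTM-style gate parameters for the memory path (an assumption).
        self.W_f = rng.normal(0, 0.1, (k, hidden_size))
        self.W_i = rng.normal(0, 0.1, (k, hidden_size))
        self.W_z = rng.normal(0, 0.1, (k, hidden_size))
        # Parameters of a hypothetical hidden-state ODE:
        # dh/dt = tanh(x @ W_h + h @ U_h) - h
        self.W_h = rng.normal(0, 0.1, (input_size, hidden_size))
        self.U_h = rng.normal(0, 0.1, (hidden_size, hidden_size))

    def step(self, x, h, c, dt, n_euler=4):
        """One update for an input x arriving dt time units after the last."""
        # 1) Evolve the continuous-time state with explicit Euler sub-steps,
        #    so the cell can respond to arbitrary time lags dt.
        for _ in range(n_euler):
            dh = np.tanh(x @ self.W_h + h @ self.U_h) - h
            h = h + (dt / n_euler) * dh
        # 2) Update the separate memory compartment additively: gradients
        #    flow through c without passing through the ODE dynamics.
        xh = np.concatenate([x, h])
        f = sigmoid(xh @ self.W_f)   # forget gate
        i = sigmoid(xh @ self.W_i)   # input gate
        z = np.tanh(xh @ self.W_z)   # candidate memory update
        c = f * c + i * z
        return h, c

cell = MixedMemoryCell(input_size=3, hidden_size=8)
h, c = np.zeros(8), np.zeros(8)
# Irregularly-sampled sequence: (observation, elapsed time since last one).
for x, dt in [(np.ones(3), 0.1), (np.zeros(3), 2.5), (-np.ones(3), 0.4)]:
    h, c = cell.step(x, h, c, dt)
print(h.shape, c.shape)  # (8,) (8,)
```

The key design point the abstract describes is the separation: the ODE path gives the continuous-time response to irregular sampling, while the additive memory path keeps a constant-error route for gradients over long horizons.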
Author Information
Mathias Lechner (MIT)
Ramin Hasani (MIT | Vanguard)
More from the Same Authors
-
2022 : PyHopper - A Plug-and-Play Hyperparameter Optimization Engine »
Mathias Lechner · Ramin Hasani · Sophie Neubauer · Philipp Neubauer · Daniela Rus -
2022 : Are All Vision Models Created Equal? A Study of the Open-Loop to Closed-Loop Causality Gap »
Mathias Lechner · Ramin Hasani · Alexander Amini · Tsun-Hsuan Johnson Wang · Thomas Henzinger · Daniela Rus -
2022 : Infrastructure-based End-to-End Learning and Prevention of Driver Failure »
Noam Buckman · Shiva Sreeram · Mathias Lechner · Yutong Ban · Ramin Hasani · Sertac Karaman · Daniela Rus -
2022 Poster: Efficient Dataset Distillation using Random Feature Approximation »
Noel Loo · Ramin Hasani · Alexander Amini · Daniela Rus -
2022 Poster: Evolution of Neural Tangent Kernels under Benign and Adversarial Training »
Noel Loo · Ramin Hasani · Alexander Amini · Daniela Rus -
2021 Poster: Sparse Flows: Pruning Continuous-depth Models »
Lucas Liebenwein · Ramin Hasani · Alexander Amini · Daniela Rus -
2021 Poster: Causal Navigation by Continuous-time Neural Networks »
Charles Vorbach · Ramin Hasani · Alexander Amini · Mathias Lechner · Daniela Rus -
2017 : Opening Remarks »
Ramin Hasani -
2017 Workshop: Workshop on Worm's Neural Information Processing (WNIP) »
Ramin Hasani · Manuel Zimmer · Stephen Larson · Tomas Kazmar · Radu Grosu