Poster
Re-examination of the Role of Latent Variables in Sequence Modeling
Guokun Lai · Zihang Dai · Yiming Yang · Shinjae Yoo
East Exhibition Hall B, C #52
Keywords: [ Density Estimation ] [ Algorithms ] [ Recurrent Networks ] [ Algorithms -> Stochastic Methods; Deep Learning ]
With latent variables, stochastic recurrent models have achieved state-of-the-art performance in modeling sound-wave sequences. However, the opposite has been observed in other domains, where standard recurrent networks often outperform stochastic models. To better understand this discrepancy, we re-examine the roles of latent variables in stochastic recurrent models for speech density estimation. Our analysis reveals that, under the fully factorized output distribution imposed in previous evaluations, the stochastic variants were implicitly leveraging intra-step correlation while the deterministic recurrent baselines were prohibited from doing so, resulting in an unfair comparison. To correct this unfairness, we remove the restriction in our re-examination, allowing all models to explicitly leverage intra-step correlation with an auto-regressive structure. Over a diverse set of univariate and multivariate sequential data, including human speech, MIDI music, handwriting trajectories, and frame-permuted speech, our results show that stochastic recurrent models fail to deliver the performance advantage claimed in previous work. In contrast, standard recurrent models equipped with an auto-regressive output distribution consistently perform better, dramatically advancing the state-of-the-art results on three speech datasets.
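To illustrate the distinction the abstract draws between a fully factorized and an auto-regressive per-step output distribution, here is a minimal PyTorch sketch. It is not the paper's implementation; all class names, the Gaussian output heads, and the `frame_dim`/`hidden_dim` parameters are illustrative assumptions.

```python
import torch
import torch.nn as nn


class FactorizedOutputRNN(nn.Module):
    """Deterministic RNN with a fully factorized per-step output:
    p(x_t | h_t) = prod_d N(x_{t,d}; mu_d(h_t), sigma_d(h_t)).
    Dimensions within a frame are conditionally independent given h_t,
    so intra-step correlation cannot be modeled directly."""

    def __init__(self, frame_dim: int, hidden_dim: int):
        super().__init__()
        self.rnn = nn.GRU(frame_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2 * frame_dim)  # per-dim mean and log-std

    def log_prob(self, x):  # x: (batch, time, frame_dim)
        h, _ = self.rnn(x[:, :-1])                       # condition on past frames only
        mu, log_std = self.head(h).chunk(2, dim=-1)
        dist = torch.distributions.Normal(mu, log_std.exp())
        return dist.log_prob(x[:, 1:]).sum(dim=(1, 2))   # factorized across dimensions


class AutoRegressiveOutputRNN(nn.Module):
    """Deterministic RNN whose per-step output is itself auto-regressive
    across dimensions:
    p(x_t | h_t) = prod_d N(x_{t,d}; mu_d(h_t, x_{t,<d}), sigma_d(h_t, x_{t,<d})).
    This lets a standard recurrent model explicitly capture intra-step
    correlation, matching the setting of the paper's re-examination."""

    def __init__(self, frame_dim: int, hidden_dim: int):
        super().__init__()
        self.frame_dim = frame_dim
        self.rnn = nn.GRU(frame_dim, hidden_dim, batch_first=True)
        # One small head per dimension, conditioned on h_t and preceding dims.
        self.heads = nn.ModuleList(
            nn.Linear(hidden_dim + d, 2) for d in range(frame_dim)
        )

    def log_prob(self, x):  # x: (batch, time, frame_dim)
        h, _ = self.rnn(x[:, :-1])
        target = x[:, 1:]
        total = 0.0
        for d in range(self.frame_dim):
            # Condition dimension d on the hidden state and dimensions < d.
            ctx = torch.cat([h, target[..., :d]], dim=-1)
            mu, log_std = self.heads[d](ctx).unbind(-1)
            dist = torch.distributions.Normal(mu, log_std.exp())
            total = total + dist.log_prob(target[..., d]).sum(dim=1)
        return total
```

Both models are trained by maximizing `log_prob`; the only difference is whether dimensions within a frame see one another, which is the confound the paper identifies in earlier comparisons.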