Workshop: All Things Attention: Bridging Different Perspectives on Attention

First De-Trend then Attend: Rethinking Attention for Time-Series Forecasting

Xiyuan Zhang · Xiaoyong Jin · Karthick Gopalswamy · Gaurav Gupta · Youngsuk Park · Xingjian Shi · Hao Wang · Danielle Maddix · Yuyang (Bernie) Wang

Keywords: [ attention relationship ] [ seasonal-trend decomposition ] [ frequency attention ] [ time-series forecasting ]


Transformer-based models have gained great popularity and demonstrated promising results in long-term time-series forecasting in recent years. In addition to learning attention in the time domain, recent works also explore learning attention in frequency domains (e.g., the Fourier and wavelet domains), since seasonal patterns can be captured more effectively there. In this work, we seek to understand the relationships between attention models in the time domain and in different frequency domains. Theoretically, we show that attention models in different domains are equivalent under linear conditions (i.e., when a linear kernel is applied to the attention scores). Empirically, we analyze how attention models in different domains behave differently through various synthetic experiments with seasonality, trend, and noise, with emphasis on the role of the softmax operation therein. Both the theoretical and empirical analyses motivate us to propose a new method, TDformer (Trend Decomposition Transformer), which first applies seasonal-trend decomposition and then additively combines an MLP that predicts the trend component with Fourier attention that predicts the seasonal component to obtain the final prediction. Extensive experiments on benchmark time-series forecasting datasets demonstrate that TDformer achieves state-of-the-art performance compared with existing attention-based models.
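To make the described pipeline concrete, below is a minimal PyTorch sketch of the "first de-trend, then attend" idea: a moving-average seasonal-trend decomposition (assumed here, in the style of Autoformer-type models), an MLP that forecasts the trend over the time axis, and a single-head Fourier attention block that forecasts the seasonal component, combined additively. All module names, the kernel size, the projection heads, and the choice of applying softmax to the magnitudes of the complex scores are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class SeriesDecomp(nn.Module):
    """Seasonal-trend decomposition via moving average (assumed kernel size)."""

    def __init__(self, kernel_size: int = 25):
        super().__init__()
        self.avg = nn.AvgPool1d(kernel_size, stride=1,
                                padding=kernel_size // 2,
                                count_include_pad=False)

    def forward(self, x):  # x: (batch, length, channels)
        trend = self.avg(x.transpose(1, 2)).transpose(1, 2)
        seasonal = x - trend
        return seasonal, trend


class FourierAttention(nn.Module):
    """Single-head attention computed between Fourier components (sketch)."""

    def __init__(self, d_model: int):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.scale = d_model ** -0.5

    def forward(self, x):  # x: (batch, length, d_model)
        L = x.size(1)
        # Move queries, keys, and values to the frequency domain.
        qf = torch.fft.rfft(self.q_proj(x), dim=1)
        kf = torch.fft.rfft(self.k_proj(x), dim=1)
        vf = torch.fft.rfft(self.v_proj(x), dim=1)
        # Complex scores between frequency components.
        scores = torch.einsum("bfd,bgd->bfg", qf, kf.conj()) * self.scale
        # Softmax over score magnitudes -- one possible choice; the paper
        # specifically studies the role of softmax in such models.
        weights = torch.softmax(scores.abs(), dim=-1).to(scores.dtype)
        out_f = torch.einsum("bfg,bgd->bfd", weights, vf)
        # Return to the time domain.
        return torch.fft.irfft(out_f, n=L, dim=1)


class TDformerSketch(nn.Module):
    """De-trend first; MLP forecasts trend, Fourier attention forecasts seasonality."""

    def __init__(self, d_model: int, seq_len: int, pred_len: int):
        super().__init__()
        self.decomp = SeriesDecomp()
        self.trend_mlp = nn.Sequential(
            nn.Linear(seq_len, pred_len), nn.ReLU(), nn.Linear(pred_len, pred_len)
        )
        self.seasonal_attn = FourierAttention(d_model)
        self.seasonal_head = nn.Linear(seq_len, pred_len)

    def forward(self, x):  # x: (batch, seq_len, d_model)
        seasonal, trend = self.decomp(x)
        # Both branches forecast over the time axis, then are added back together.
        trend_out = self.trend_mlp(trend.transpose(1, 2)).transpose(1, 2)
        seasonal_out = self.seasonal_head(
            self.seasonal_attn(seasonal).transpose(1, 2)
        ).transpose(1, 2)
        return seasonal_out + trend_out


model = TDformerSketch(d_model=8, seq_len=96, pred_len=24)
y = model(torch.randn(4, 96, 8))  # -> shape (4, 24, 8)
```

The additive recombination in the last line of `forward` mirrors the abstract's description: the MLP handles the slowly varying trend, while the frequency-domain attention focuses on the seasonal residual, where periodic structure is easier to capture.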
