

Poster

SDformer: Similarity-driven Discrete Transformer For Time Series Generation

Zhicheng Chen · FENG SHIBO · Zhong Zhang · Xi Xiao · Xingyu Gao · Peilin Zhao

East Exhibit Hall A-C #4301
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

The superior generative capabilities of Denoising Diffusion Probabilistic Models (DDPMs) have been demonstrated across a multitude of domains. Recently, DDPMs have been extended to time series generation tasks, where they significantly outperform other deep generative models. However, we identify two main challenges with these methods: 1) the inference time is excessively long; 2) the quality of the generated time series leaves room for improvement. In this paper, we propose a method based on a discrete token modeling technique, called the Similarity-driven Discrete Transformer (SDformer). Specifically, SDformer uses a similarity-driven vector quantization method to learn high-quality discrete token representations of time series, followed by a discrete Transformer that models the data distribution at the token level. Comprehensive experiments show that our method significantly outperforms competing approaches in the quality of the generated time series while also ensuring a short inference time. Furthermore, without requiring retraining, SDformer can be applied directly to predictive tasks and still achieves commendable results.
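The first stage of the pipeline described above — mapping continuous time-series segments to discrete tokens via a similarity-driven codebook lookup — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the patch shape, codebook size, and choice of cosine similarity as the "similarity-driven" criterion are all assumptions for the sake of the example.

```python
import numpy as np

def similarity_quantize(patches, codebook):
    """Map each time-series patch to the index of its most similar
    codebook vector, using cosine similarity as an assumed similarity
    measure (SDformer's exact criterion may differ)."""
    # Normalize rows so that a dot product equals cosine similarity.
    p = patches / (np.linalg.norm(patches, axis=1, keepdims=True) + 1e-8)
    c = codebook / (np.linalg.norm(codebook, axis=1, keepdims=True) + 1e-8)
    sims = p @ c.T                    # (num_patches, codebook_size)
    return sims.argmax(axis=1)        # one discrete token per patch

# Toy example: 6 patches of length 4, codebook of 8 candidate vectors.
rng = np.random.default_rng(0)
patches = rng.normal(size=(6, 4))
codebook = rng.normal(size=(8, 4))
tokens = similarity_quantize(patches, codebook)
print(tokens.shape)  # one token index per patch
```

The resulting integer token sequence is what the second-stage discrete Transformer would model autoregressively, analogous to language modeling over a vocabulary.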
