Recent Advances in Time Series Foundation Models: Have We Reached the ‘BERT Moment’?
Abstract
Foundation models (FMs) have achieved great success in NLP and vision, inspiring over 20 new time series FMs (TSFMs) in the past year. Despite promising results, studies show that carefully designed lightweight supervised baselines often match TSFM performance. Unlike NLP after its "BERT moment," where pretrained models transferred broadly with minimal adaptation, TSFMs still require full fine-tuning to be competitive in real-world scenarios. Additionally, some tabular FMs rival TSFMs without being time series-specific. Recent benchmarks also provide mixed evidence: GIFT-Eval favors TSFMs, OpenTS shows statistical models outperforming deep learning on univariate data, and FoundTS finds supervised baselines on par with TSFMs. This workshop aims to bring together researchers to examine the gap between TSFM potential and real-world utility, and to identify benchmarks and applications where TSFMs can truly excel.
The key topics of this workshop include, but are not limited to:
- Benchmarking Foundation Models in Time Series
- Scaling Laws and Efficiency in Time Series Models
- Evaluating Transferability and Adaptability of Foundation Models
- Leveraging Foundation Models of Other Modalities for Time Series
- Unsupervised Performance Estimation of TSFMs
- Industrial Benchmarking of Time Series Foundation Models
More details are provided in our Call for Papers.