Implicit neural representations (INRs) have recently emerged as a powerful tool for accurate, resolution-independent encoding of data. Their robustness as general approximators has been demonstrated across a wide variety of data sources, with applications in image, sound, and 3D scene representation. However, little attention has been given to leveraging these architectures for the representation and analysis of time series data. In this paper, we first analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed. Second, we propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset. We introduce an FFT-based loss to guide training so that all frequencies in the time series are preserved. We show that this network can be used to encode time series as INRs, and that their embeddings can be interpolated to generate new time series from existing ones. We evaluate our generative method by using it for data augmentation, and show that it is competitive against current state-of-the-art approaches for time series augmentation.
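To make the core idea concrete, the following is a minimal sketch of fitting a time series with an INR-style model under an FFT-based loss. This is a hypothetical illustration, not the paper's architecture: a fixed random sinusoidal feature layer with a trainable linear readout stands in for a full INR, and the training objective combines a time-domain MSE with a penalty on the mismatch of FFT magnitudes; all hyperparameters (feature count, frequency scale, loss weight, step size) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy time series: two sinusoids sampled on [0, 1).
N = 128
t = np.linspace(0.0, 1.0, N, endpoint=False)
y = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 11 * t)

# Stand-in for an INR: fixed random sinusoidal features (akin to a
# SIREN first layer) with a trainable linear readout.
H = 256
W1 = rng.normal(0.0, 30.0, size=H)        # random input frequencies (rad)
b1 = rng.uniform(-np.pi, np.pi, size=H)   # random phases
Phi = np.sin(np.outer(t, W1) + b1)        # (N, H) feature matrix
w = np.zeros(H)                           # readout weights, y_hat = Phi @ w

def fft_mag_grad(y_hat, y_true):
    """Gradient of sum_k (|FFT(y_hat)|_k - |FFT(y_true)|_k)^2 w.r.t. y_hat."""
    Yh, Yt = np.fft.fft(y_hat), np.fft.fft(y_true)
    mh = np.abs(Yh) + 1e-12               # guard against division by zero
    g = (mh - np.abs(Yt)) * np.conj(Yh) / mh
    # The DFT matrix is symmetric, so the chain rule reduces to another FFT.
    return 2.0 * np.real(np.fft.fft(g))

lam, lr = 1.0 / N, 1e-2                   # assumed loss weight and step size
for _ in range(5000):
    y_hat = Phi @ w
    # Combined gradient: time-domain MSE term + FFT-magnitude term.
    grad_y = 2.0 * (y_hat - y) + lam * fft_mag_grad(y_hat, y)
    w -= lr * (Phi.T @ grad_y) / N

mse = np.mean((Phi @ w - y) ** 2)
print(f"reconstruction MSE: {mse:.2e}")
```

Because the frequency loss compares magnitude spectra, it penalizes missing or spurious frequency content directly, which the pointwise time-domain MSE alone can underweight for low-energy high-frequency components (here, the 11 Hz term).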
Author Information
Elizabeth Fons (J.P. Morgan Chase)
Alejandro Sztrajman (University College London)
Yousef El-Laham (J.P. Morgan Chase)
Alexandros Iosifidis (Aarhus University)
Svitlana Vyetrenko (J. P. Morgan, Artificial Intelligence Research)