Continual Learning for Particle Accelerators
Abstract
Particle accelerators operate under dynamically changing conditions, which often lead to data distribution drifts. These drifts pose significant challenges for Machine Learning (ML) models, which typically fail to maintain performance on such non-stationary data. In particle accelerators, the primary sources of data drift include changes in accelerator settings and non-measured factors such as machine degradation. Previous research has proposed conditional models to handle multiple beam configurations effectively; however, it is challenging to train ML models on all possible configuration settings. Additionally, conditional models alone cannot address performance degradation caused by drifts in non-measured factors. These limitations contribute to a significant gap between ML development and its deployment in real-world operational settings. To bridge this gap, in this paper we identify key areas within particle accelerators where continual learning can mitigate drift-induced performance degradation. In addition, we present a real use case in which memory-based continual learning is employed to maintain stable performance of a conditional Auto-Encoder model when switching between different beam settings.
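To make the memory-based idea concrete, the sketch below illustrates the general pattern the abstract refers to: a replay buffer of past samples is mixed into each training batch so that a conditional model retains performance on earlier beam settings while adapting to a new one. This is a minimal, self-contained illustration, not the paper's actual model or data: the toy linear conditional Auto-Encoder, the synthetic `make_batch` generator, the reservoir-sampling buffer, and all dimensions and hyperparameters are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

class ReplayBuffer:
    """Fixed-size memory of past (input, condition) pairs via reservoir sampling."""
    def __init__(self, capacity):
        self.capacity, self.data, self.seen = capacity, [], 0

    def add(self, x, c):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, c))
        else:
            j = rng.integers(0, self.seen)
            if j < self.capacity:
                self.data[j] = (x, c)

    def sample(self, n):
        idx = rng.integers(0, len(self.data), size=min(n, len(self.data)))
        xs, cs = zip(*[self.data[i] for i in idx])
        return np.stack(xs), np.stack(cs)

class LinearCondAE:
    """Toy linear conditional Auto-Encoder: the condition (beam setting) is
    concatenated to both encoder and decoder inputs; trained by SGD on MSE."""
    def __init__(self, d_in, d_cond, d_lat, lr=0.02):
        self.We = rng.normal(0, 0.1, (d_lat, d_in + d_cond))
        self.Wd = rng.normal(0, 0.1, (d_in, d_lat + d_cond))
        self.lr, self.d_lat = lr, d_lat

    def forward(self, X, C):
        Z = np.hstack([X, C]) @ self.We.T
        Xh = np.hstack([Z, C]) @ self.Wd.T
        return Z, Xh

    def train_step(self, X, C):
        Z, Xh = self.forward(X, C)
        err = Xh - X
        n = X.shape[0]
        # MSE gradients (up to a constant factor absorbed into the learning rate)
        gWd = err.T @ np.hstack([Z, C]) / n
        dZ = err @ self.Wd[:, :self.d_lat] / n
        gWe = dZ.T @ np.hstack([X, C])
        self.Wd -= self.lr * gWd
        self.We -= self.lr * gWe
        return float(np.mean(err ** 2))

d_in, d_cond, d_lat = 4, 2, 2
ae = LinearCondAE(d_in, d_cond, d_lat)
mem = ReplayBuffer(capacity=256)
means = (1.0, -1.0)  # each "beam setting" shifts the synthetic data distribution

def make_batch(setting, n=32):
    # Synthetic stand-in for accelerator signals under one beam setting (assumption).
    C = np.tile(np.eye(d_cond)[setting], (n, 1))
    X = rng.normal(means[setting], 0.1, (n, d_in))
    return X, C

# Train on setting 0, then switch to setting 1 while replaying stored samples,
# so performance on setting 0 is retained after the switch.
for setting in (0, 1):
    for _ in range(300):
        X, C = make_batch(setting)
        if mem.data:
            Xm, Cm = mem.sample(32)
            X, C = np.vstack([X, Xm]), np.vstack([C, Cm])
        ae.train_step(X, C)
        for x, c in zip(*make_batch(setting, 4)):
            mem.add(x, c)
```

After the second phase, reconstruction error on the first beam setting remains low because replayed setting-0 samples keep appearing in the training batches; without the buffer, SGD on setting-1 data alone would overwrite the setting-0 solution (catastrophic forgetting).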