Large Language Model-based Bayesian Optimization for Tokamak Stabilization
Abstract
Nuclear fusion holds the potential to address many of today's most pressing problems, from climate change to large-scale food and water production. However, modeling and operating tokamaks remain highly challenging due to distribution shifts arising, for example, from hardware changes between experiments, actuator failures, and impurities in the plasma. In this work, we focus on the task of predicting and mitigating tearing instabilities, which can cause the plasma to disrupt and potentially damage the tokamak. To this end, we propose a large language model-informed Bayesian optimization scheme that aims to efficiently explore and identify highly stable electron cyclotron heating configurations. Our choice of algorithm allows us to account for uncertainty in the model, which in turn helps it adapt to the changes induced by distribution shifts. The large language model, by contrast, allows us to leverage high-dimensional prior data and to process experiment logs written by scientists and operators, which would usually be impossible with conventional Bayesian optimization tools. In preliminary offline experiments, conducted on a historical dataset from the DIII-D tokamak, our method shows promising performance relative to several baselines.
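To make the Bayesian-optimization loop mentioned above concrete, the sketch below shows the generic technique on a toy problem: a tiny pure-Python Gaussian process surrogate with an upper-confidence-bound (UCB) acquisition rule proposes the next "heating setting" to try. The objective `stability`, the 1D search space, and all parameters are illustrative assumptions, not the paper's actual method, models, or data.

```python
# Minimal Bayesian-optimization sketch (illustrative only; the objective,
# kernel, and acquisition parameters are assumptions, not the paper's).
import math

def stability(power):
    # Hypothetical stand-in for a plasma-stability score, unknown to the
    # optimizer; peaks at power = 0.7.
    return math.exp(-(power - 0.7) ** 2 / 0.05)

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel with lengthscale `ls`.
    return math.exp(-(a - b) ** 2 / (2 * ls ** 2))

def solve(A, b):
    # Solve A x = b by Gaussian elimination with partial pivoting.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(X, y, x, noise=1e-6):
    # Exact GP posterior mean/variance at a single test point x.
    K = [[rbf(a, b) + (noise if i == j else 0.0) for j, b in enumerate(X)]
         for i, a in enumerate(X)]
    alpha = solve(K, y)
    k_star = [rbf(a, x) for a in X]
    mean = sum(ks * al for ks, al in zip(k_star, alpha))
    v = solve(K, k_star)
    var = max(rbf(x, x) - sum(ks * vi for ks, vi in zip(k_star, v)), 1e-12)
    return mean, var

X = [0.1, 0.5, 0.9]            # initial heating settings
y = [stability(x) for x in X]  # observed stability scores
for _ in range(10):
    cands = [i / 100 for i in range(101)]
    # UCB acquisition: posterior mean + 2 * posterior std.
    def ucb(c):
        m, v = gp_posterior(X, y, c)
        return m + 2 * math.sqrt(v)
    nxt = max(cands, key=ucb)
    X.append(nxt)
    y.append(stability(nxt))
best_score, best_x = max(zip(y, X))
print(round(best_x, 2))  # best setting found, close to the true optimum 0.7
```

The uncertainty term in the acquisition is what lets such a loop re-explore after a distribution shift: regions where the surrogate is uncertain regain a high acquisition value and get re-sampled. The paper's contribution layers an LLM-derived prior and log-processing on top of this generic loop.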