Reducing Down(stream)time: Pretraining Molecular GNNs using Heterogeneous AI Accelerators
Jenna A Bilbrey · Kristina Herman · Henry Sprueill · Sotiris Xantheas · Payel Das · Manuel Lopez Roldan · Mike Kraus · Hatem Helal · Sutanay Choudhury

Recent advancements in self-supervised learning and transfer learning have popularized approaches that pretrain models on massive data sources and subsequently finetune them for specific tasks. While such approaches have become the norm in fields such as natural language processing, implementation and evaluation of transfer learning for chemistry are still in their early stages. In this work, we demonstrate finetuning for downstream tasks of a graph neural network (GNN) pretrained on a molecular database of 2.7 million water clusters. Using Graphcore IPUs as AI accelerators for training molecular GNNs reduces training time from a reported 2.7 days on 0.5M clusters to 92 minutes on 2.7M clusters. Finetuning the pretrained model for the downstream tasks of molecular dynamics and level-of-theory transfer took only 8.3 hours and 28 minutes, respectively, on a single GPU.
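As a rough illustration of the pretrain-then-finetune workflow the abstract describes, the sketch below shows the generic recipe in plain PyTorch: keep a pretrained message-passing encoder, reinitialize a task-specific head, and continue training at a small learning rate on the downstream data. SimpleGNNLayer, EnergyGNN, the checkpoint filename, and all hyperparameters are illustrative assumptions, not the authors' model or training setup.

```python
import os
import torch
import torch.nn as nn

class SimpleGNNLayer(nn.Module):
    """One round of message passing: sum neighbor features, then update."""
    def __init__(self, dim):
        super().__init__()
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, x, edge_index):
        src, dst = edge_index                       # each of shape (num_edges,)
        agg = torch.zeros_like(x)
        agg.index_add_(0, dst, x[src])              # sum incoming messages per node
        return torch.relu(self.update(torch.cat([x, agg], dim=-1)))

class EnergyGNN(nn.Module):
    """Message-passing encoder plus a scalar readout head (e.g. cluster energy)."""
    def __init__(self, dim=64, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(SimpleGNNLayer(dim) for _ in range(num_layers))
        self.head = nn.Linear(dim, 1)

    def forward(self, x, edge_index):
        for layer in self.layers:
            x = layer(x, edge_index)
        return self.head(x.sum(dim=0))              # sum-pool nodes -> one scalar

model = EnergyGNN()

# Load pretrained encoder weights if a checkpoint exists (path is hypothetical);
# strict=False lets the task head skip any mismatched keys from pretraining.
if os.path.exists("pretrained_encoder.pt"):
    model.load_state_dict(torch.load("pretrained_encoder.pt"), strict=False)

model.head.reset_parameters()                       # fresh task-specific head
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # small LR for finetuning
```

The key design point this recipe relies on is that the encoder weights transfer across tasks while only the readout head and learning rate change, which is why the downstream finetuning runs reported in the abstract are orders of magnitude cheaper than pretraining.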

Author Information

Jenna A Bilbrey (Pacific Northwest National Laboratory)
Kristina Herman (University of Washington)
Henry Sprueill (Pacific Northwest National Laboratory)
Sotiris Xantheas (Pacific Northwest National Laboratory)
Payel Das (IBM Research)
Manuel Lopez Roldan (Graphcore)
Mike Kraus (Graphcore)
Hatem Helal (Graphcore)
Sutanay Choudhury (Pacific Northwest National Laboratory)
