Extracting informative representations of molecules using graph neural networks (GNNs) is crucial in AI-driven drug discovery. Recently, the graph research community has been trying to replicate the success of self-supervised pretraining in natural language processing, with several successes claimed. However, we find that the benefit of self-supervised pretraining on small molecular datasets can be negligible in many cases. We conduct thorough ablation studies on the key components of GNN pretraining, including pretraining objectives, data splitting methods, input features, pretraining dataset scales, and GNN architectures, to see how they affect downstream task accuracy. Our first important finding is that self-supervised graph pretraining does not have statistically significant advantages over non-pretrained methods in many settings. Secondly, although noticeable improvement can be observed with additional supervised pretraining, the improvement may diminish with richer features or more balanced data splits. Thirdly, hyper-parameters can have a larger impact on downstream task accuracy than the choice of pretraining task, especially when the downstream datasets are small. Finally, we conjecture that the complexity of some pretraining methods on small molecules may be insufficient, and support this with empirical evidence on different pretraining datasets.
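The first finding hinges on testing whether pretrained and non-pretrained GNNs differ significantly once run-to-run variance is accounted for. Below is a minimal sketch of how such a per-seed comparison could be carried out; the `roc_auc_pretrained` and `roc_auc_scratch` arrays are assumed inputs holding downstream ROC-AUC scores from matched random seeds (hypothetical placeholders, not the paper's actual protocol or numbers).

```python
import numpy as np
from scipy import stats


def compare_runs(roc_auc_pretrained, roc_auc_scratch, alpha=0.05):
    """Paired t-test over matched seeds: is pretraining significantly better?"""
    pre = np.asarray(roc_auc_pretrained, dtype=float)
    scratch = np.asarray(roc_auc_scratch, dtype=float)
    assert pre.shape == scratch.shape, "one score per seed, same seeds in both settings"

    # Paired test: the same seed / split is used for both settings, so compare differences.
    t_stat, p_value = stats.ttest_rel(pre, scratch)
    mean_gain = float(np.mean(pre - scratch))  # average AUC gain from pretraining
    significant = (p_value < alpha) and (mean_gain > 0)
    return {"mean_gain": mean_gain, "t": float(t_stat),
            "p": float(p_value), "significant": significant}


# Usage: pass per-seed ROC-AUC scores collected from, e.g., 10 fine-tuning runs
# on one downstream molecular property prediction task:
#   compare_runs(scores_with_pretraining, scores_from_scratch)
```

A paired (rather than unpaired) test is the natural choice here because each seed fixes the data split and initialization noise shared by the two settings, isolating the effect of pretraining itself.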
Author Information
Ruoxi Sun (Google)
Hanjun Dai (Google Brain)
Adams Wei Yu (Google Brain)
More from the Same Authors
- 2020 : Session B, Poster 20: A Framework For Differentiable Discovery Of Graph Algorithms
  Hanjun Dai
- 2022 : Annealed Training for Combinatorial Optimization on Graphs
  Haoran Sun · Etash Guha · Hanjun Dai
- 2022 Workshop: New Frontiers in Graph Learning
  Jiaxuan You · Marinka Zitnik · Rex Ying · Yizhou Sun · Hanjun Dai · Stefanie Jegelka
- 2022 Poster: Optimal Scaling for Locally Balanced Proposals in Discrete Spaces
  Haoran Sun · Hanjun Dai · Dale Schuurmans
- 2021 Poster: Towards understanding retrosynthesis by energy-based models
  Ruoxi Sun · Hanjun Dai · Li Li · Steven Kearnes · Bo Dai
- 2021 Poster: Reverse engineering learned optimizers reveals known and novel mechanisms
  Niru Maheswaranathan · David Sussillo · Luke Metz · Ruoxi Sun · Jascha Sohl-Dickstein
- 2020 : Poster Session B
  Ravichandra Addanki · Andreea-Ioana Deac · Yujia Xie · Francesco Landolfi · Antoine Prouvost · Claudius Gros · Renzo Massobrio · Abhishek Cauligi · Simon Alford · Hanjun Dai · Alberto Franzin · Nitish Kumar Panigrahy · Brandon Kates · Iddo Drori · Taoan Huang · Zhou Zhou · Marin Vlastelica · Anselm Paulus · Aaron Zweig · Minsu Cho · Haiyan Yin · Michal Lisicki · Nan Jiang · Haoran Sun
- 2020 : Contributed Talk: A Framework For Differentiable Discovery Of Graph Algorithms
  Hanjun Dai
- 2020 : Reverse engineering learned optimizers reveals known and novel mechanisms
  Niru Maheswaranathan · David Sussillo · Luke Metz · Ruoxi Sun · Jascha Sohl-Dickstein
- 2020 Poster: Compositional Generalization via Neural-Symbolic Stack Machines
  Xinyun Chen · Chen Liang · Adams Wei Yu · Dawn Song · Denny Zhou
- 2020 Poster: Differentiable Top-k with Optimal Transport
  Yujia Xie · Hanjun Dai · Minshuo Chen · Bo Dai · Tuo Zhao · Hongyuan Zha · Wei Wei · Tomas Pfister
- 2020 Poster: Learning Discrete Energy-based Models via Auxiliary-variable Local Exploration
  Hanjun Dai · Rishabh Singh · Bo Dai · Charles Sutton · Dale Schuurmans
- 2019 Poster: Scalable Bayesian inference of dendritic voltage via spatiotemporal recurrent state space models
  Ruoxi Sun · Ian Kinsella · Scott Linderman · Liam Paninski
- 2019 Oral: Scalable Bayesian inference of dendritic voltage via spatiotemporal recurrent state space models
  Ruoxi Sun · Ian Kinsella · Scott Linderman · Liam Paninski