Conditional Invariances for Conformer Invariant Protein Representations
Representation learning for proteins is an emerging area in geometric deep learning. Recent works have factored in both the relational (atomic bonds) and the geometric aspects (atomic positions) of the task, notably bringing together graph neural networks (GNNs) with neural networks for point clouds. The equivariances and invariances to geometric transformations (group actions such as rotations and translations) considered so far treat large molecules as rigid structures. In many important settings, however, proteins co-exist as an ensemble of multiple stable conformations. These conformations cannot be described as input-independent transformations of the protein: two proteins may require different sets of transformations to describe their viable conformations. To address this limitation, we introduce the concept of conditional transformations (CT). CTs capture protein structure while respecting the constraints imposed by dihedral (torsion) angles and steric repulsions between atoms. We then introduce a Markov chain Monte Carlo framework to learn representations that are invariant to these conditional transformations. Our results show that endowing existing baseline models with these conditional transformations improves their performance without sacrificing computational efficiency.
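To make the MCMC-over-conditional-transformations idea concrete, here is a minimal Python sketch, not the paper's implementation: a Metropolis-style sampler proposes rotations about rotatable bonds (torsion moves), rejects conformers with steric clashes, and averages an encoder's output over the sampled chain to obtain a conformer-invariant embedding. All names here (`rotate_dihedral`, `has_clash`, `MIN_DIST`, the `encode` callback, and the `rotatable_bonds` format) are hypothetical placeholders, not identifiers from the paper.

```python
import numpy as np

MIN_DIST = 1.0  # hypothetical steric-clash threshold (angstroms)

def has_clash(coords):
    """Crude steric filter: flags any pair of distinct atoms closer than
    MIN_DIST (a real filter would exclude covalently bonded pairs)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.any(d < MIN_DIST)

def rotate_dihedral(coords, bond, moving_mask, angle):
    """Rotate the atoms selected by `moving_mask` about the bond (i, j)
    axis by `angle` radians, using Rodrigues' rotation formula."""
    i, j = bond
    axis = coords[j] - coords[i]
    axis /= np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    out = coords.copy()
    out[moving_mask] = (out[moving_mask] - coords[i]) @ R.T + coords[i]
    return out

def conformer_invariant_embedding(coords, encode, rotatable_bonds,
                                  n_steps=200, step=0.3, rng=None):
    """Average `encode(coords)` (e.g., a GNN encoder) over conformers
    sampled by Metropolis-style torsion proposals; proposals that
    produce a steric clash are rejected (the conditional constraint)."""
    rng = rng or np.random.default_rng(0)
    samples = []
    for _ in range(n_steps):
        bond, mask = rotatable_bonds[rng.integers(len(rotatable_bonds))]
        proposal = rotate_dihedral(coords, bond, mask, rng.normal(0.0, step))
        if not has_clash(proposal):
            coords = proposal
        samples.append(encode(coords))  # rejected steps repeat the state
    return np.mean(samples, axis=0)
```

Because the Gaussian torsion proposal is symmetric and every clash-free proposal is accepted, this chain targets a uniform distribution over the clash-free torsion region; the paper's actual proposal and acceptance rule may differ. Averaging the encoder output over the chain yields an embedding that is approximately invariant to which conformer of the protein was given as input.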
Author Information
Balasubramaniam Srinivasan (Amazon)
Vassilis Ioannidis (University of Minnesota, Minneapolis)
Soji Adeshina (University of California Berkeley)
Mayank Kakodkar (Purdue University)
George Karypis (University of Minnesota, Minneapolis)
Bruno Ribeiro (Purdue)
More from the Same Authors
- 2022 Poster: OOD Link Prediction Generalization Capabilities of Message-Passing GNNs in Larger Test Graphs »
  Yangze Zhou · Gitta Kutyniok · Bruno Ribeiro
- 2022: Conditional Invariances for Conformer Invariant Protein Representations »
  Balasubramaniam Srinivasan · Vassilis Ioannidis · Soji Adeshina · Mayank Kakodkar · George Karypis · Bruno Ribeiro
- 2022: Differentially Private Bias-Term only Fine-tuning of Foundation Models »
  Zhiqi Bu · Yu-Xiang Wang · Sheng Zha · George Karypis
- 2022: Variational Causal Inference »
  Yulun Wu · Layne Price · Zichen Wang · Vassilis Ioannidis · Rob Barton · George Karypis
- 2022: Sequence-Graph Duality: Unifying User Modeling with Self-Attention for Sequential Recommendation »
  Zeren Shui · Ge Liu · Anoop Deoras · George Karypis
- 2022: Contributed Talk: Differentially Private Bias-Term only Fine-tuning of Foundation Models »
  Zhiqi Bu · Yu-Xiang Wang · Sheng Zha · George Karypis
- 2022 Spotlight: OOD Link Prediction Generalization Capabilities of Message-Passing GNNs in Larger Test Graphs »
  Yangze Zhou · Gitta Kutyniok · Bruno Ribeiro
- 2022 Spotlight: Lightning Talks 1B-1 »
  Qitian Wu · Runlin Lei · Rongqin Chen · Luca Pinchetti · Yangze Zhou · Abhinav Kumar · Hans Hao-Hsun Hsu · Wentao Zhao · Chenhao Tan · Zhen Wang · Shenghui Zhang · Yuesong Shen · Tommaso Salvatori · Gitta Kutyniok · Zenan Li · Amit Sharma · Leong Hou U · Yordan Yordanov · Christian Tomani · Bruno Ribeiro · Yaliang Li · David P Wipf · Daniel Cremers · Bolin Ding · Beren Millidge · Ye Li · Yuhang Song · Junchi Yan · Zhewei Wei · Thomas Lukasiewicz
- 2022 Poster: Injecting Domain Knowledge from Empirical Interatomic Potentials to Neural Networks for Predicting Material Properties »
  Zeren Shui · Daniel Karls · Mingjian Wen · Ilia Nikiforov · Ellad Tadmor · George Karypis
- 2021 Poster: Reconstruction for Powerful Graph Representations »
  Leonardo Cotta · Christopher Morris · Bruno Ribeiro
- 2020 Poster: Unsupervised Joint k-node Graph Representations with Compositional Energy-Based Models »
  Leonardo Cotta · Carlos H. C. Teixeira · Ananthram Swami · Bruno Ribeiro