

Poster

Beyond Efficiency: Molecular Data Pruning for Enhanced Generalization

Dingshuo Chen · Zhixun Li · Yuyan Ni · Guibin Zhang · Ding Wang · Qiang Liu · Shu Wu · Jeffrey Yu · Liang Wang

Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

With the emergence of diverse molecular tasks and massive datasets, efficient training has become an urgent yet under-explored issue in this area. Data pruning (DP), a widely adopted approach to reducing training cost, filters out less influential samples to form a coreset for training. However, the increasing reliance on pretrained models for molecular tasks renders traditional in-domain DP methods incompatible. We therefore propose MolPeg, a Molecular data Pruning framework for enhanced Generalization, which targets the source-free data pruning scenario, where pruning is applied with pretrained models. By maintaining two models that update at different paces during training, we introduce a novel scoring function that measures the informativeness of samples via their loss discrepancy. As a plug-and-play framework, MolPeg perceives both the source and target domains and consistently outperforms existing DP methods across four downstream tasks. Remarkably, it can surpass the performance of full-dataset training even when pruning up to 60-70% of the data on the HIV and PCBA datasets. Our work suggests that effective data-pruning metrics could provide a viable path to both enhanced efficiency and superior generalization in transfer learning.
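The core mechanism, two models updated at different paces with samples scored by their loss discrepancy, can be sketched as follows. This is a minimal, hypothetical illustration in PyTorch, not the paper's implementation: the function names, the use of an exponential-moving-average (EMA) model as the slow learner, the cross-entropy loss, and the `keep_ratio` parameter are all assumptions for the sake of the example.

```python
import torch
import torch.nn.functional as F

def ema_update(slow, fast, decay=0.99):
    # Hypothetical slow learner: an exponential moving average of the
    # fast model's weights, so the two models update at different paces.
    with torch.no_grad():
        for p_slow, p_fast in zip(slow.parameters(), fast.parameters()):
            p_slow.mul_(decay).add_(p_fast, alpha=1.0 - decay)

def informativeness_scores(fast, slow, x, y):
    # Score each sample by the discrepancy between the two models' losses
    # (one plausible reading of the loss-discrepancy criterion).
    with torch.no_grad():
        loss_fast = F.cross_entropy(fast(x), y, reduction="none")
        loss_slow = F.cross_entropy(slow(x), y, reduction="none")
    return (loss_fast - loss_slow).abs()

def prune_batch(scores, keep_ratio=0.3):
    # Retain only the highest-scoring (most informative) samples;
    # keep_ratio=0.3 would correspond to pruning 70% of the data.
    k = max(1, int(keep_ratio * scores.numel()))
    return scores.topk(k).indices
```

In this sketch, the pruning loop would compute `informativeness_scores` for each batch (or epoch), train the fast model only on the indices returned by `prune_batch`, and then call `ema_update` to advance the slow model.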
