Poster

HLM-Cite: Hybrid Language Model Workflow for Text-based Scientific Citation Prediction

Qianyue Hao · Jingyang Fan · Fengli Xu · Jian Yuan · Yong Li


Abstract: Citation networks are one of the key infrastructures of modern science, interweaving previous literature and allowing researchers to navigate the knowledge production system. To mine the information hidden in the link space of such networks, predicting which previous papers (candidates) a new paper (query) will cite is a critical problem that has long been studied. However, an important aspect remains unconsidered. The roles of a paper's citations vary significantly, ranging from foundational knowledge to superficial context, and distinguishing them requires understanding the logical relationships among papers beyond the simple edges of citation networks. The emerging textual reasoning ability of LLMs sheds light on revealing these logical relationships, but there are two major challenges. First, in practice, a new paper may select its citations from a gigantic pool of existing papers, and the texts of all candidates far exceed LLMs' reasoning context length. Second, the logical relationships are implicit, and directly prompting an LLM to predict citations yields results based on simple textual similarity rather than logical reasoning about relationships. In this paper, we define the novel concept of core citation to distinguish the important citations from the superficial ones. Thereby, we evolve the citation prediction task from simple binary classification to distinguishing core citations from superficial citations and non-citations. We then propose $\textbf{HLM-Cite}$, a $\textbf{H}$ybrid $\textbf{L}$anguage $\textbf{M}$odel workflow for citation prediction, which combines embedding and generative LMs. We design a curriculum finetuning procedure to adapt a pretrained text embedding model to coarsely retrieve high-likelihood core citations from vast candidate sets, and then design an LLM agentic workflow to rank the retrieved papers through one-shot reasoning, revealing the implicit relationships among papers. With the two-stage pipeline, we can scale the candidate sets to 100K papers, thousands of times larger than in existing works. We evaluate HLM-Cite on a dataset across 19 scientific fields, demonstrating a 17.6% performance improvement over SOTA methods. Our code is open-source at https://anonymous.4open.science/r/H-LM-7D36 for reproducibility.
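The two-stage pipeline described above follows a common retrieve-then-rerank pattern: a finetuned embedding model narrows a huge candidate pool to a small shortlist, which an LLM then ranks. The sketch below illustrates that structure only; it is a minimal assumption-laden illustration, not the authors' implementation. The cosine-similarity retrieval, the shortlist size `k`, and the `llm_rank_fn` callable (standing in for the paper's agentic one-shot reasoning prompt) are all hypothetical; the actual HLM-Cite workflow, including curriculum finetuning, is in the linked repository.

```python
import numpy as np

def retrieve_candidates(query_emb, candidate_embs, k=100):
    """Stage 1 (sketch): coarse retrieval over a vast candidate set.

    Assumes embeddings were precomputed with a text embedding model
    (in the paper, one adapted via curriculum finetuning). Scores
    candidates by cosine similarity and keeps the top k.
    """
    q = query_emb / np.linalg.norm(query_emb)
    c = candidate_embs / np.linalg.norm(candidate_embs, axis=1, keepdims=True)
    scores = c @ q                      # cosine similarity per candidate
    return np.argsort(-scores)[:k]      # indices of top-k candidates

def rerank_shortlist(query_text, shortlist_texts, llm_rank_fn):
    """Stage 2 (sketch): LLM ranking of the retrieved shortlist.

    llm_rank_fn is a hypothetical callable wrapping an LLM prompt that
    reasons about the query/candidate relationships and returns the
    shortlist indices in ranked order.
    """
    return llm_rank_fn(query_text, shortlist_texts)
```

The key design point this sketch reflects is the division of labor: the cheap embedding stage makes a 100K-scale candidate set tractable, while the expensive LLM stage only ever sees a shortlist that fits within its context length.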