Retrieval Capabilities of Large Language Models Scale with Pretraining FLOPs
Jacob Portes · Connor Jennings · Erica Yuen · Sasha Doubov · Michael Carbin
Abstract
How does retrieval performance scale with pretraining FLOPs? We benchmark retrieval performance across LLMs ranging from 125 million to 7 billion parameters, pretrained on datasets ranging from 1 billion to more than 2 trillion tokens. We find that retrieval performance on zero-shot BEIR tasks scales predictably with LLM size, training duration, and estimated FLOPs. We also show that in-context learning scores are strongly correlated with retrieval scores across tasks. Finally, we discuss the implications of these findings for the development of LLM-based retrievers.
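As a rough illustration of the "estimated FLOPs" axis mentioned above, pretraining compute is often approximated with the common C ≈ 6·N·D rule of thumb, where N is the parameter count and D is the number of pretraining tokens. The sketch below applies this approximation to the parameter and token ranges stated in the abstract; the specific model/token pairings are illustrative assumptions, not configurations reported by the paper.

```python
# Minimal sketch (not from the paper): estimate pretraining compute with the
# widely used C ~= 6 * N * D approximation.
# N = number of parameters, D = number of pretraining tokens.

def estimated_pretraining_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total pretraining FLOPs as 6 * parameters * tokens."""
    return 6.0 * n_params * n_tokens

# Illustrative (model size, token count) pairs spanning the ranges in the abstract.
configs = [
    (125e6, 1e9),   # 125M parameters, 1B tokens
    (1.3e9, 300e9), # 1.3B parameters, 300B tokens (hypothetical midpoint)
    (7e9, 2e12),    # 7B parameters, 2T tokens
]

for n_params, n_tokens in configs:
    flops = estimated_pretraining_flops(n_params, n_tokens)
    print(f"{n_params:.3g} params, {n_tokens:.3g} tokens -> ~{flops:.3g} FLOPs")
```

Plotting zero-shot BEIR scores against this estimated-FLOPs quantity (rather than against parameter count alone) is what lets models of different sizes and training durations be compared on a single compute axis.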