

Oral in Workshop on Distribution Shifts: New Frontiers with Foundation Models

LLM Routing with Benchmark Datasets

Tal Shnitzer · Anthony Ou · Mírian Silva · Kate Soule · Yuekai Sun · Justin Solomon · Neil Thompson · Mikhail Yurochkin

Keywords: [ Large language models ] [ benchmark datasets ] [ model selection ] [ OOD generalization ]

[ Project Page ]
Fri 15 Dec 11:25 a.m. PST — 11:35 a.m. PST

Abstract:

There is a rapidly growing number of open-source Large Language Models (LLMs) and benchmark datasets for comparing them. While some models dominate these benchmarks, no single model typically achieves the best accuracy across all tasks and use cases. In this work, we address the challenge of selecting the best LLM from a collection of models for a new task. We propose a new formulation of the problem, in which benchmark datasets are repurposed to learn a "router" model for this LLM selection, and we show that the problem can be reduced to a collection of binary classification tasks. We demonstrate the utility and limitations of learning model routers from various benchmark datasets.
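To make the reduction concrete, below is a minimal sketch of routing via per-model binary classifiers. It assumes that, for each candidate LLM, benchmark prompts come with 0/1 labels indicating whether that LLM answered correctly; the feature extractor (TF-IDF) and classifier (logistic regression) are placeholder choices for illustration, not the authors' implementation.

```python
# Hypothetical sketch: learn one binary "correctness" classifier per LLM from
# benchmark data, then route a new prompt to the LLM with the highest predicted
# probability of answering correctly. Features and classifier are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression


def train_router(benchmark_prompts, correctness_by_model):
    """Fit one binary classifier per LLM on shared prompt features.

    benchmark_prompts: list[str] of benchmark inputs.
    correctness_by_model: dict[str, list[int]] mapping each model name to 0/1
        labels indicating whether that model answered each prompt correctly.
    """
    vectorizer = TfidfVectorizer().fit(benchmark_prompts)
    X = vectorizer.transform(benchmark_prompts)
    classifiers = {
        model: LogisticRegression(max_iter=1000).fit(X, labels)
        for model, labels in correctness_by_model.items()
    }
    return vectorizer, classifiers


def route(prompt, vectorizer, classifiers):
    """Send the prompt to the LLM with the highest predicted success probability."""
    x = vectorizer.transform([prompt])
    scores = {m: clf.predict_proba(x)[0, 1] for m, clf in classifiers.items()}
    return max(scores, key=scores.get)
```

In this view, the router never sees the LLMs' outputs at selection time; it only compares the per-model classifiers' scores on the incoming prompt, which is what lets benchmark data stand in as training labels.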
