Active Model Selection for Large Language Models
Yavuz Durmazkeser · Patrik Okanovic · Andreas Kirsch · Torsten Hoefler · Nezihe Merve Gürel
Abstract
We introduce LLM SELECTOR, the first framework for active model selection of Large Language Models (LLMs). Unlike prior work on evaluation and benchmarking, LLM SELECTOR addresses the challenge of selecting the best LLM for a given task under a limited annotation budget. In particular, LLM SELECTOR adaptively identifies a small set of informative queries to annotate in order to efficiently select the best LLM for the given task. To further reduce annotation cost, we leverage a judge-based annotation model with access to an oracle. Through extensive experiments, we show that LLM SELECTOR reduces annotation costs by up to 58.33% for identifying the best model, and by up to 62.50% when selecting a near-best LLM that lies within close vicinity of the best model.
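To make the setting concrete, the following is a minimal sketch of an active model selection loop, not the paper's actual LLM SELECTOR algorithm: under a fixed annotation budget, it repeatedly picks the query on which the candidate models disagree most (a simple informativeness heuristic assumed here for illustration), obtains a label from an `annotate` oracle, and returns the model with the best empirical accuracy on the annotated queries. All names (`select_best_llm`, `annotate`) are hypothetical.

```python
from collections import defaultdict

def select_best_llm(models, queries, annotate, budget):
    """Return the model with the highest estimated accuracy,
    annotating at most `budget` adaptively chosen queries."""
    correct = defaultdict(int)   # per-model count of correct answers
    seen = 0                     # number of queries annotated so far
    pool = list(queries)
    for _ in range(min(budget, len(pool))):
        # Heuristic: prefer the query where the models' answers disagree most.
        query = max(pool, key=lambda q: len({m(q) for m in models}))
        pool.remove(query)
        truth = annotate(query)  # oracle / judge-based annotation
        seen += 1
        for m in models:
            if m(query) == truth:
                correct[m] += 1
    # Best empirical accuracy over the annotated subset.
    return max(models, key=lambda m: correct[m] / max(seen, 1))
```

For instance, with two toy "models" (plain callables) and a ground-truth oracle, the loop spends its budget on the queries where the two disagree and correctly selects the stronger model. A real implementation would replace the disagreement heuristic with a principled acquisition criterion and the callables with LLM inference.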