

Poster

Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models

Zhanhui Zhou · Zhixuan Liu · Jie Liu · Zhichen Dong · Chao Yang · Yu Qiao


Abstract: Large language models are usually fine-tuned to align with human intent. However, fine-tuning a large language model can be challenging. In this work, we introduce *weak-to-strong search*, framing the alignment of a large language model as a test-time greedy search to maximize the log-likelihood difference between small tuned and untuned models while sampling from the frozen large model. This method serves both as (i) a compute-efficient model up-scaling strategy that bypasses directly tuning the large model and as (ii) an instance of weak-to-strong generalization that enhances a strong model with weak test-time guidance. Empirically, we demonstrate the flexibility of weak-to-strong search across different tasks. In controlled-sentiment generation and summarization, we use tuned and untuned $\texttt{GPT-2}$s to effectively improve the alignment of large models without additional training. Crucially, in a more difficult instruction-following benchmark, AlpacaEval 2.0, we show that reusing off-the-shelf small model pairs (e.g., $\texttt{zephyr-7b-beta}$ and its untuned version) can significantly improve the length-controlled win rates of both white-box and black-box large models against $\texttt{gpt-4-turbo}$ (e.g., $34.4 \rightarrow 37.9$ for $\texttt{Llama-3-Instruct-70B}$ and $16.0 \rightarrow 20.1$ for $\texttt{gpt-3.5-turbo-instruct}$), despite the small models' low win rates $\approx 10.0$.
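The greedy search described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: at each step, candidate continuations are sampled from the frozen large model, and the candidate maximizing the log-likelihood difference between the small tuned and untuned models is kept. All three model functions below are hypothetical toy stand-ins (simple word-scoring stubs), not real LLM APIs.

```python
import random

def large_model_sample(prefix, k, rng):
    """Toy stand-in for sampling k candidate continuations from the
    frozen large model; a real system would call an LLM here."""
    vocab = ["good", "great", "bad", "okay"]
    return [rng.choice(vocab) for _ in range(k)]

def small_logprob(chunk, tuned):
    """Toy stand-in for a small model's log-likelihood of a chunk.
    The 'tuned' model prefers positive words; the untuned one is indifferent."""
    scores = {"good": 0.6, "great": 0.8, "okay": 0.2, "bad": -0.9}
    return sum(scores.get(w, 0.0) for w in chunk.split()) if tuned else 0.0

def weak_to_strong_search(prompt, steps=3, k=4, seed=0):
    """Greedy test-time search: repeatedly sample k chunks from the large
    model and keep the one maximizing
    log p_tuned(chunk) - log p_untuned(chunk)."""
    rng = random.Random(seed)
    text = prompt
    for _ in range(steps):
        candidates = large_model_sample(text, k, rng)
        best = max(
            candidates,
            key=lambda c: small_logprob(c, True) - small_logprob(c, False),
        )
        text = (text + " " + best).strip()
    return text
```

Because the guidance signal is only the small models' log-likelihood ratio, the large model itself stays frozen, which is what makes the method applicable to black-box models that expose sampling but not gradients.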
