

Poster

The Representation Landscape of Few-Shot Learning and Fine-Tuning in Large Language Models

Diego Doimo · Alessandro Serra · Alessio Ansuini · Alberto Cazzaniga

East Exhibit Hall A-C #3303
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

In-context learning (ICL) and supervised fine-tuning (SFT) are two common strategies for improving the performance of modern large language models (LLMs) on specific tasks. Despite their different natures, these strategies often lead to comparable performance gains under similar data constraints. However, little is known about whether they induce similar representations inside LLMs. We approach this problem by analyzing the probability landscape of the hidden representations produced in the two cases. More specifically, we compare how LLMs solve the same question-answering task, finding that ICL and SFT create very different internal structures, both undergoing a sharp transition in the middle of the network. In the first half of the network, ICL shapes interpretable representations hierarchically organized according to their semantic content. In contrast, the probability landscape obtained with SFT is fuzzier and semantically mixed. In the second half of the model, the fine-tuned representations develop probability modes that better encode the identity of answers, while the landscape of ICL representations is characterized by less defined peaks. Our approach reveals the diverse computational strategies developed inside LLMs to solve the same task across different conditions, taking a step toward designing optimal methods to extract information from language models.
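As a rough illustration of the kind of analysis the abstract describes, the sketch below extracts per-layer hidden representations of the last prompt token under a few-shot (ICL) prompt; these layer-wise vectors are the objects whose density landscape can then be studied across network depth. This is not the authors' code: the model name, the toy prompt, and the choice of the last-token vector are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): collect the hidden state
# of the final prompt token at every layer of a causal LM under a few-shot prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper analyzes larger LLMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

# Toy few-shot question-answering prompt (illustrative only).
few_shot_prompt = (
    "Q: What is the capital of France?\nA: Paris\n\n"
    "Q: What is the capital of Japan?\nA:"
)

with torch.no_grad():
    inputs = tok(few_shot_prompt, return_tensors="pt")
    out = model(**inputs)

# out.hidden_states is a tuple of (num_layers + 1) tensors of shape
# (batch, seq_len, hidden_dim); keep the last-token vector at each layer
# as the representation whose probability landscape is compared across depth.
last_token_reps = torch.stack([h[0, -1, :] for h in out.hidden_states])
print(last_token_reps.shape)  # (num_layers + 1, hidden_dim)
```

Repeating this over many questions (and over a fine-tuned copy of the same model) yields, for each layer, a point cloud of representations on which density-based analyses of the kind the abstract mentions can be carried out.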
