

Poster

BERTs are Generative In-Context Learners

David Samuel

East Exhibit Hall A-C #2703
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

This paper explores the in-context learning capabilities of masked language models, challenging the common view that this ability does not 'emerge' in these models. We present an embarrassingly simple inference technique that enables DeBERTa to operate as a generative model without any additional training. Our findings demonstrate that DeBERTa can match and even surpass GPT-3, a model released at the same time, which famously introduced the paradigm of in-context learning. The comparative analysis reveals that masked and causal language models behave very differently, as each clearly outperforms the other on different categories of tasks. This suggests that there is great potential for a hybrid training approach that takes advantage of the strengths of both training objectives.
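The abstract does not spell out the inference procedure. For readers unfamiliar with the idea of using a masked language model generatively, the sketch below illustrates one common approach: repeatedly append a [MASK] token to the context and fill it in. This is a minimal illustration under assumed choices (the microsoft/deberta-v3-large checkpoint, greedy decoding), not necessarily the paper's exact method.

    # Minimal sketch: generation with a masked LM by iteratively appending
    # a [MASK] token and predicting it. Checkpoint and decoding strategy
    # are illustrative assumptions, not the paper's stated setup.
    import torch
    from transformers import AutoTokenizer, AutoModelForMaskedLM

    model_name = "microsoft/deberta-v3-large"  # assumed checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForMaskedLM.from_pretrained(model_name).eval()

    def generate(prompt: str, max_new_tokens: int = 20) -> str:
        # Keep the prompt's token ids, dropping the trailing [SEP].
        ids = tokenizer(prompt, return_tensors="pt").input_ids[0][:-1].tolist()
        for _ in range(max_new_tokens):
            # Append a [MASK] slot followed by [SEP], then predict the mask.
            batch = torch.tensor(
                [ids + [tokenizer.mask_token_id, tokenizer.sep_token_id]]
            )
            with torch.no_grad():
                logits = model(batch).logits
            next_id = int(logits[0, len(ids)].argmax())  # greedy decoding
            if next_id == tokenizer.sep_token_id:
                break
            ids.append(next_id)
        return tokenizer.decode(ids, skip_special_tokens=True)

    print(generate("The capital of Norway is"))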
