Demonstration

exBERT: A Visual Analysis Tool to Explain BERT's Learned Representations

Benjamin Hoover · Hendrik Strobelt · Sebastian Gehrmann

East Exhibition Hall B, C #801

Abstract:

Large language models can produce powerful contextual representations that lead to improvements across many NLP tasks. Although these models may encode undesired inductive biases, it is challenging to identify what information they capture in their learned representations. Since the model-internal reasoning process is often guided by a sequence of learned self-attention mechanisms, it is paramount to be able to explore what the attention has learned. While static analyses can yield targeted insights, interactive tools are more dynamic and help humans build an intuition for the model-internal reasoning process. We present exBERT, a tool that helps gather insights into the meaning of contextual representations. exBERT matches a human-specified input to similar contexts in a large annotated dataset. By aggregating these annotations across all similar contexts, exBERT can help explain what each attention head has learned.
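The matching-and-aggregation idea in the abstract can be sketched in a few lines: given a contextual embedding for a query token, find the most similar token embeddings in an annotated corpus and summarize their annotations by majority vote. This is a minimal illustration with toy random vectors and made-up POS tags, not the actual exBERT implementation or its data.

```python
import numpy as np
from collections import Counter

def top_k_similar(query, corpus, k):
    # Cosine similarity between the query embedding and each corpus embedding,
    # returning the indices of the k most similar corpus tokens.
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    sims = c @ q
    return np.argsort(sims)[::-1][:k]

def aggregate_annotations(indices, annotations):
    # Majority vote over the annotations of the nearest contexts.
    return Counter(annotations[i] for i in indices).most_common(1)[0][0]

# Toy data: six 4-dimensional "contextual embeddings" with hypothetical POS tags.
rng = np.random.default_rng(0)
corpus = rng.normal(size=(6, 4))
annotations = ["NOUN", "VERB", "NOUN", "ADJ", "NOUN", "VERB"]

# A query very close to corpus[0], so its nearest neighbor should be index 0.
query = corpus[0] + 0.01 * rng.normal(size=4)
idx = top_k_similar(query, corpus, k=3)
print(aggregate_annotations(idx, annotations))
```

In the real tool the embeddings would come from a BERT-style model and the annotations from a large linguistically annotated dataset; the aggregation step is what lets a user read off, for example, whether an attention head consistently attends to a particular part of speech.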
