
Spotlight Poster
Counterfactual Memorization in Neural Language Models
Chiyuan Zhang · Daphne Ippolito · Katherine Lee · Matthew Jagielski · Florian Tramer · Nicholas Carlini

Tue Dec 12 03:15 PM -- 05:15 PM (PST) @ Great Hall & Hall B1+B2 #1506

Modern neural language models that are widely used in various NLP tasks risk memorizing sensitive information from their training data. Understanding this memorization is important in real-world applications and also from a learning-theoretical perspective. An open question in previous studies of language model memorization is how to filter out "common" memorization. In fact, most memorization criteria strongly correlate with the number of occurrences in the training set, capturing memorized familiar phrases, public knowledge, templated texts, or other repeated data. We formulate a notion of counterfactual memorization, which characterizes how a model's predictions change if a particular document is omitted during training. We identify and study counterfactually-memorized training examples in standard text datasets. We estimate the influence of each memorized training example on the validation set and on generated texts, showing how this can provide direct evidence of the source of memorization at test time.
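The counterfactual notion described above can be sketched as a difference of expectations: a model's expected performance on an example when that example is in the training set, minus its expected performance when the example is held out. The sketch below illustrates this with hypothetical per-model scores (the score values and function name are illustrative stand-ins, not the paper's actual experimental setup):

```python
def counterfactual_memorization(scores_in, scores_out):
    """Estimate counterfactual memorization of one training example.

    scores_in:  per-model performance on the example, for models whose
                training subset INCLUDED the example.
    scores_out: per-model performance on the example, for models whose
                training subset EXCLUDED the example.

    Returns the difference of the two empirical means, approximating
    E[perf | example in train] - E[perf | example held out].
    """
    mean_in = sum(scores_in) / len(scores_in)
    mean_out = sum(scores_out) / len(scores_out)
    return mean_in - mean_out


# Hypothetical accuracies on a single example, from models trained on
# random data subsets that did or did not contain it.
scores_in = [0.95, 0.90, 0.92]   # example was in the training subset
scores_out = [0.40, 0.35, 0.45]  # example was held out

print(round(counterfactual_memorization(scores_in, scores_out), 3))
```

A large positive value suggests the model's performance on the example depends heavily on having seen that specific document, rather than on patterns learned from the rest of the data.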

Author Information

Chiyuan Zhang (Google Research)
Daphne Ippolito (School of Engineering and Applied Science, University of Pennsylvania)
Katherine Lee (Cornell University)
Matthew Jagielski (Google DeepMind)
Florian Tramer (ETH Zurich)
Nicholas Carlini (Google)
