

Poster

Be like a Goldfish, Don't Memorize! Mitigating Memorization in Generative LLMs

Abhimanyu Hans · John Kirchenbauer · Yuxin Wen · Neel Jain · Hamid Kazemi · Prajwal Singhania · Siddharth Singh · Gowthami Somepalli · Jonas Geiping · Abhinav Bhatele · Tom Goldstein

East Exhibit Hall A-C #4709
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Large language models can memorize and repeat their training data, causing privacy and copyright risks. To mitigate memorization, we introduce a subtle modification to the next-token training objective that we call the goldfish loss. During training, a randomly sampled subset of tokens is excluded from the loss computation. These dropped tokens are not memorized by the model, which prevents verbatim reproduction of a complete chain of tokens from the training set. We run extensive experiments training billion-scale LLaMA-2 models, both pre-trained and trained from scratch, and demonstrate significant reductions in extractable memorization with little to no impact on downstream benchmarks. Code and checkpoints: https://github.com/ahans30/goldfish-loss
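The core idea above can be sketched in a few lines: build a binary mask that drops roughly 1-in-k token positions, then average the per-token losses over only the kept positions. This is a minimal illustrative sketch in plain Python, not the authors' implementation; the function names (`goldfish_mask`, `goldfish_loss`) and the purely random masking are assumptions for illustration (the repository linked above contains the actual code, including deterministic masking variants).

```python
import random

def goldfish_mask(num_tokens, k, seed=0):
    """Return a 0/1 mask over token positions.

    Positions with mask value 0 are excluded ("dropped") from the loss,
    so the model is never trained to predict those tokens. Here each
    position is dropped independently with probability 1/k; this is a
    simplification of the masking strategies described in the paper.
    """
    rng = random.Random(seed)
    return [0 if rng.random() < 1.0 / k else 1 for _ in range(num_tokens)]

def goldfish_loss(per_token_losses, mask):
    """Average next-token loss over only the unmasked positions."""
    kept = [loss for loss, m in zip(per_token_losses, mask) if m == 1]
    return sum(kept) / len(kept) if kept else 0.0

# Example: with losses [1.0, 2.0, 3.0] and the middle token dropped,
# only positions 0 and 2 contribute, giving an average of 2.0.
example = goldfish_loss([1.0, 2.0, 3.0], [1, 0, 1])
```

Because dropped positions receive no gradient, the model cannot learn to reproduce the complete training sequence verbatim, which is the mechanism behind the reduced extractable memorization reported in the abstract.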
