

Poster

Latent Paraphrasing: Perturbation on Layers Improves Knowledge Injection in Language Models

Minki Kang · Sung Ju Hwang · Gibbeum Lee · Jaewoong Cho

East Exhibit Hall A-C #3308
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

In this paper, we aim to improve the ability of Large Language Models (LLMs) to learn new knowledge without relying on external memory. We do this by fine-tuning LLMs on documents containing new facts, a process we refer to as knowledge injection. We observe that LLMs fine-tuned on paraphrased data outperform models fine-tuned without such augmentation, suggesting that paraphrasing is an effective strategy for knowledge injection. However, paraphrasing at the data level poses diversity and efficiency challenges, as it requires repeated interventions by an external model to augment the data whenever new knowledge needs to be injected. To address these limitations, we propose going beyond data-level augmentation and introducing a small perturbation layer at the latent feature level of the LLM to make it more robust. This perturbation layer is designed to approximate, within the LLM, the latent feature distribution of paraphrased text from the distribution of the original text. Preliminary results suggest that this method can significantly improve performance without requiring external models for paraphrasing, and that combining it with data-level augmentation yields further gains. This approach promises more efficient and effective adaptation of LLMs to new knowledge, reducing the need for external augmentation models.
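
To make the idea of a latent-level perturbation layer concrete, here is a minimal PyTorch sketch of one plausible realization: a learned, input-dependent Gaussian perturbation applied to a transformer block's hidden states during fine-tuning and disabled at inference. The module names, the bottleneck MLP, and the Gaussian parameterization are illustrative assumptions, not the paper's actual architecture.

import torch
import torch.nn as nn

class LatentPerturbation(nn.Module):
    """Hypothetical perturbation layer inserted after a transformer block.

    It adds a learned, input-dependent Gaussian perturbation to the hidden
    states, intended to mimic the latent-feature variability that paraphrased
    inputs would produce. The parameterization (a small MLP predicting
    per-dimension mean and log-variance) is an assumption for illustration.
    """

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        # Small bottleneck MLP predicting the perturbation statistics.
        self.proj = nn.Sequential(
            nn.Linear(hidden_dim, bottleneck_dim),
            nn.GELU(),
            nn.Linear(bottleneck_dim, 2 * hidden_dim),
        )

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden_dim)
        mu, log_var = self.proj(hidden).chunk(2, dim=-1)
        if self.training:
            # Reparameterized Gaussian noise, applied during fine-tuning only.
            eps = torch.randn_like(mu)
            return hidden + mu + eps * torch.exp(0.5 * log_var)
        return hidden  # identity at inference


# Usage sketch: wrap one transformer block so its output is perturbed.
class PerturbedBlock(nn.Module):
    def __init__(self, block: nn.Module, hidden_dim: int):
        super().__init__()
        self.block = block
        self.perturb = LatentPerturbation(hidden_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.perturb(self.block(x))


# Toy shape check with random hidden states (no real LLM involved).
layer = LatentPerturbation(hidden_dim=16)
layer.train()
h = torch.randn(2, 5, 16)
print(layer(h).shape)  # torch.Size([2, 5, 16])

Because the perturbation is identity at inference, the wrapped model keeps its original behavior at test time; the noise only regularizes the latent features while the new facts are being injected.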
