
Variational Bayesian Unlearning
Quoc Phong Nguyen · Bryan Kian Hsiang Low · Patrick Jaillet

Thu Dec 10 09:00 PM -- 11:00 PM (PST) @ Poster Session 6 #1770

This paper studies the problem of approximately unlearning a Bayesian model from a small subset of the training data to be erased. We frame this problem as one of minimizing the Kullback-Leibler divergence between the approximate posterior belief of model parameters after directly unlearning from the erased data and the exact posterior belief from retraining with the remaining data. Using the variational inference (VI) framework, we show that this is equivalent to minimizing an evidence upper bound that trades off fully unlearning from the erased data against not entirely forgetting the posterior belief given the full data (i.e., including the remaining data); the latter prevents catastrophic unlearning, which can render the model useless. In model training with VI, only an approximate (instead of exact) posterior belief given the full data can be obtained, which makes unlearning even more challenging. We propose two novel tricks to tackle this challenge. We empirically demonstrate our unlearning methods on Bayesian models such as sparse Gaussian process and logistic regression using synthetic and real-world datasets.
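The KL-minimization framing above has a clean closed form in conjugate models: the minimizer of E_q[log p(D_e | θ)] + KL(q || p(θ | D)) is proportional to p(θ | D) / p(D_e | θ), i.e., the posterior with the erased data's likelihood divided out, which coincides with the retrained posterior when everything is exact. The sketch below illustrates this on a hypothetical 1-D Gaussian-mean model with known noise variance; it is not one of the paper's experiments, and all names (`posterior`, `x_erase`, etc.) are ours.

```python
import numpy as np

# Hypothetical toy setting (not from the paper's experiments):
# prior theta ~ N(0, 1), observations x_i ~ N(theta, sigma2) with known sigma2.
sigma2 = 0.5
rng = np.random.default_rng(0)
x_full = rng.normal(1.0, np.sqrt(sigma2), size=20)
x_erase = x_full[:5]   # D_e: subset to be unlearned
x_keep = x_full[5:]    # D_r: remaining data

def posterior(xs):
    """Exact conjugate posterior N(m, v) for the mean under a N(0, 1) prior."""
    prec = 1.0 + len(xs) / sigma2          # prior precision + data precision
    m = (xs.sum() / sigma2) / prec
    return m, 1.0 / prec

m_full, v_full = posterior(x_full)         # belief after training on the full data D
m_ret, v_ret = posterior(x_keep)           # gold standard: retrain on D_r only

# Unlearning objective from the abstract (up to an additive constant):
#   min_q  E_q[log p(D_e | theta)] + KL(q || p(theta | D)),
# whose optimum is q*(theta) ∝ p(theta | D) / p(D_e | theta).
# In this conjugate Gaussian case the division is a natural-parameter subtraction.
prec_u = 1.0 / v_full - len(x_erase) / sigma2          # subtract erased precision
m_u = (m_full / v_full - x_erase.sum() / sigma2) / prec_u
v_u = 1.0 / prec_u

# The directly unlearned posterior matches retraining with the remaining data.
print(m_u, v_u)
print(m_ret, v_ret)
```

In non-conjugate models this division has no closed form, and with VI only an approximate full-data posterior is available, which is exactly the regime the paper's two tricks address; the toy case above only verifies the objective's fixed point.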

Author Information

Quoc Phong Nguyen (National University of Singapore)
Bryan Kian Hsiang Low (National University of Singapore)
Patrick Jaillet (MIT)
