

Poster

Deep Leakage from Gradients

Ligeng Zhu · Zhijian Liu · Song Han

East Exhibition Hall B + C #154

Keywords: [ Privacy, Anonymity, and Security ] [ Algorithms -> Large Scale Learning ] [ Applications -> Computer Vision ] [ Deep Learning ]


Abstract:

Exchanging gradients is a widely used scheme in modern multi-node learning systems (e.g., distributed training, collaborative learning). For a long time, people believed that gradients were safe to share, i.e., that the training data would not be leaked by gradient exchange. However, in this paper we show that private training data can be obtained from the publicly shared gradients. The leakage takes only a few gradient steps and recovers the original training data rather than look-alike alternatives. We name this attack deep leakage from gradients and validate its effectiveness on both computer vision and natural language processing tasks. We empirically show that our attack is much stronger than previous approaches, and we hope this raises awareness of the need to rethink the safety of sharing gradients. We also discuss several possible strategies to defend against this deep leakage.
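The attack the abstract describes can be viewed as a gradient-matching optimization: initialize dummy inputs and labels, compute the gradients they would induce on the shared model, and minimize the distance between those dummy gradients and the publicly shared ones. The following is a minimal PyTorch sketch of that idea; the function name, input shapes, step count, and soft-label formulation are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def deep_leakage_from_gradients(model, true_grads, input_shape, num_classes,
                                steps=300):
    # Hypothetical helper: recover training data from shared gradients.
    # Start from random dummy data and a continuous (soft) dummy label,
    # both of which will be optimized.
    dummy_x = torch.randn(input_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)
    optimizer = torch.optim.LBFGS([dummy_x, dummy_y])

    for _ in range(steps):
        def closure():
            optimizer.zero_grad()
            pred = model(dummy_x)
            # Cross-entropy against the softmax of the dummy label logits,
            # so the label stays differentiable.
            loss = torch.mean(torch.sum(
                -F.softmax(dummy_y, dim=-1) * F.log_softmax(pred, dim=-1),
                dim=-1))
            # Gradients the dummy data would produce; keep the graph so we
            # can differentiate through them.
            dummy_grads = torch.autograd.grad(
                loss, model.parameters(), create_graph=True)
            # Match the dummy gradients to the publicly shared ones.
            grad_diff = sum(((dg - tg) ** 2).sum()
                            for dg, tg in zip(dummy_grads, true_grads))
            grad_diff.backward()
            return grad_diff
        optimizer.step(closure)

    return dummy_x.detach(), dummy_y.detach()
```

L-BFGS is a natural choice here because the objective is a smooth second-order-differentiable function of a small number of variables (one input and one label), though any gradient-based optimizer could be substituted in this sketch.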
