Poster

LAMP: Extracting Text from Gradients with Language Model Priors

Mislav Balunovic · Dimitar Dimitrov · Nikola Jovanović · Martin Vechev

Hall J (level 1) #527

Keywords: [ gradient leakage ] [ Natural Language Processing ] [ privacy ] [ federated learning ]


Abstract: Recent work shows that sensitive user data can be reconstructed from gradient updates, breaking the key privacy promise of federated learning. While success was demonstrated primarily on image data, these methods do not directly transfer to other domains, such as text. In this work, we propose LAMP, a novel attack tailored to textual data that successfully reconstructs original text from gradients. Our attack is based on two key insights: (i) modelling the prior probability of text via an auxiliary language model, guiding the search towards more natural text, and (ii) alternating continuous and discrete optimization, which minimizes the reconstruction loss on embeddings while avoiding local minima via discrete text transformations. Our experiments demonstrate that LAMP is significantly more effective than prior work: it reconstructs 5x more bigrams and $23\%$ longer subsequences on average. Moreover, we are the first to recover inputs from batch sizes larger than 1 for textual models. These findings indicate that gradient updates of models operating on textual data leak more information than previously thought.
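
To make the two ideas above concrete, the sketch below illustrates gradient-matching text reconstruction in PyTorch under toy assumptions that are not from the paper: a linear classifier over concatenated token embeddings stands in for the victim model, a nearest-embedding penalty stands in for the auxiliary language-model prior, and the label is assumed known. Continuous dummy embeddings are optimized to match the observed gradients and then projected to the nearest vocabulary tokens; LAMP's actual attack instead alternates this continuous phase with discrete text transformations scored by the language model.

```python
# Minimal, self-contained sketch (assumption-laden, not the authors' code) of
# gradient-matching text reconstruction with a prior term and a discrete
# projection step, the two ingredients described in the abstract.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
vocab_size, emb_dim, seq_len, num_classes = 100, 16, 6, 2

# Toy "victim": embedding table + linear classifier over concatenated embeddings.
embedding = torch.nn.Embedding(vocab_size, emb_dim)
classifier = torch.nn.Linear(seq_len * emb_dim, num_classes)
params = list(classifier.parameters())

def classifier_grads(embs, label):
    """Gradients of the training loss w.r.t. the classifier parameters."""
    logits = classifier(embs.reshape(1, -1))
    loss = F.cross_entropy(logits, label)
    return torch.autograd.grad(loss, params, create_graph=True)

# Secret client input and the gradient update the server observes.
true_ids = torch.randint(0, vocab_size, (seq_len,))
true_label = torch.tensor([1])  # label assumed known here, for simplicity
observed = [g.detach() for g in classifier_grads(embedding(true_ids), true_label)]

# Continuous phase: optimize dummy embeddings so their gradients match the
# observed ones, regularized by a stand-in prior (distance to real token
# embeddings; LAMP instead scores candidate text with an auxiliary LM).
dummy = torch.randn(seq_len, emb_dim, requires_grad=True)
opt = torch.optim.Adam([dummy], lr=0.05)
vocab_embs = embedding.weight.detach()

for step in range(500):
    opt.zero_grad()
    grads = classifier_grads(dummy, true_label)
    rec_loss = sum(F.mse_loss(g, og) for g, og in zip(grads, observed))
    prior = torch.cdist(dummy, vocab_embs).min(dim=1).values.mean()
    (rec_loss + 0.1 * prior).backward()
    opt.step()

# Discrete step: project each position to its nearest vocabulary token. LAMP
# additionally searches over discrete transformations (e.g. token swaps),
# keeping candidates that lower the reconstruction loss and LM perplexity.
recovered = torch.cdist(dummy.detach(), vocab_embs).argmin(dim=1)
print("true tokens     :", true_ids.tolist())
print("recovered tokens:", recovered.tolist())
```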
