Poster

Pseudo-Private Data Guided Model Inversion Attacks

Xiong Peng · Bo Han · Feng Liu · Tongliang Liu · Mingyuan Zhou

East Exhibit Hall A-C #4605
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

In model inversion attacks (MIAs), adversaries attempt to recover private training data by exploiting access to a well-trained target model. Recent advances have improved MIA performance using a two-stage generative framework: a generative adversarial network first learns a fixed distributional prior, which then guides the inversion process during the attack. In this paper, however, we observe that such a fixed prior leads to a low probability of sampling actual private data during inversion, owing to the inherent gap between the prior distribution and the private data distribution, thereby constraining attack performance. To address this limitation, we propose slightly tuning the generator to increase the density around high-quality pseudo-private data, i.e., samples recovered through model inversion that exhibit characteristics of the private training data. This strategy effectively increases the probability of sampling actual private data close to these pseudo-private samples during inversion. Integrating our method strengthens the generative model inversion pipeline, yielding improvements over state-of-the-art MIAs and paving the way for new research directions in generative MIAs.
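The sketch below illustrates the two-stage idea described in the abstract under stated assumptions; it is not the authors' released implementation. The cross-entropy inversion loss, the confidence-based selection of pseudo-private samples, and the nearest-sample tuning objective are all simplifying assumptions, and every name (generator, target_model, the hyperparameters) is hypothetical.

```python
# Minimal sketch of a generative MIA with a generator-tuning step around
# pseudo-private samples. All names and objectives are illustrative
# assumptions, not the paper's exact method.
import torch
import torch.nn.functional as F

def invert(generator, target_model, target_class, n_candidates=32,
           latent_dim=128, steps=500, lr=0.05, device="cpu"):
    """Stage 2: optimize latent codes so G(z) is classified as target_class."""
    z = torch.randn(n_candidates, latent_dim, device=device, requires_grad=True)
    labels = torch.full((n_candidates,), target_class, dtype=torch.long, device=device)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(target_model(generator(z)), labels)
        loss.backward()
        opt.step()
    # Keep the highest-confidence reconstructions as "pseudo-private" data
    # (confidence filtering is an assumption, not the paper's criterion).
    with torch.no_grad():
        conf = F.softmax(target_model(generator(z)), dim=1)[:, target_class]
        keep = conf.topk(k=max(1, n_candidates // 4)).indices
        return generator(z[keep]).detach()

def tune_generator(generator, pseudo_private, steps=100, lr=1e-4,
                   latent_dim=128, device="cpu"):
    """Slightly tune G so its output distribution gains density near the
    pseudo-private samples; a simple nearest-sample pull is assumed here."""
    opt = torch.optim.Adam(generator.parameters(), lr=lr)
    for _ in range(steps):
        z = torch.randn(pseudo_private.size(0), latent_dim, device=device)
        fake = generator(z)
        # Pull each generated sample toward its closest pseudo-private sample.
        dists = torch.cdist(fake.flatten(1), pseudo_private.flatten(1))
        loss = dists.min(dim=1).values.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator
```

In use, one would alternate the two stages: run invert against the fixed prior to harvest pseudo-private samples, call tune_generator to shift density toward them, then invert again against the adapted prior.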
