
Workshop: New Frontiers in Graph Learning (GLFrontiers)

Privacy-Utility Trade-offs in Neural Networks for Medical Population Graphs: Insights from Differential Privacy and Graph Structure

Tamara Mueller · Maulik Chevli · Ameya Daigavane · Daniel Rueckert · Georgios Kaissis

Keywords: [ Medical Population Graphs ] [ differential privacy ] [ graph neural networks ]


Differential privacy (DP) is the gold standard for protecting individuals' data while enabling deep learning. It is well-established and frequently used in medicine and healthcare to protect sensitive patient data. When applying graph deep learning to so-called population graphs, however, DP becomes more challenging than on grid-like data structures such as images or tables. In this work, we initiate an empirical investigation of differentially private graph neural networks on population graphs in the medical domain by examining privacy-utility trade-offs under different graph learning methods on both real-world and synthetic datasets. We compare two state-of-the-art methods for differentially private graph deep learning and empirically audit their privacy guarantees through node membership inference and link stealing attacks. We focus in particular on the impact of the graph structure, one of the most important inherent challenges of medical population graphs. Our findings highlight both the potential and the challenges of this specific DP application area. Moreover, we find evidence that the underlying graph structure may drive the larger performance gaps observed for one of the explored methods: the degree of graph homophily correlates with the accuracy of the trained model.
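To make the homophily notion concrete, the sketch below computes the common edge-homophily ratio — the fraction of edges whose endpoints share a label — on a toy population graph. This is an illustrative assumption: the abstract does not state which homophily measure the authors use, and the node labels and edges here are invented for the example.

```python
def edge_homophily(edges, labels):
    """Fraction of edges connecting nodes with the same label.

    edges  -- iterable of (u, v) node-id pairs
    labels -- mapping from node id to class label
    """
    edges = list(edges)
    if not edges:
        return 0.0
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    return same / len(edges)


# Toy population graph: four patients with a binary diagnosis label.
labels = {0: 1, 1: 1, 2: 0, 3: 0}
edges = [(0, 1), (1, 2), (2, 3)]  # 2 of the 3 edges are same-label

print(edge_homophily(edges, labels))  # 0.666...
```

A ratio near 1 indicates a highly homophilous graph (connected patients tend to share labels), while a ratio near 0 indicates heterophily; the abstract's reported correlation suggests accuracy of the trained DP model varies with this quantity.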
