Connecting Membership Inference Privacy and Generalization through Instance-Wise Measurements
Abstract
A prevailing intuition is that decreasing the amount of information stored in a neural network should improve both privacy and generalization. Despite this intuitive connection, theoretical work in the context of differential privacy has mostly derived generalization guarantees from a given privacy parameter, rather than the reverse. Moreover, both theoretical and empirical studies suggest that regularization, whether implicit or explicit, affects privacy risk unevenly across the individual points in the training data. In this work, we take a first step toward understanding instance-wise privacy and its connection to generalization by deriving an instance-wise measurement of membership inference privacy. We then connect this measurement to generalization bounds using a data-dependent prior on the weight distribution.
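To make the notion of an instance-wise membership inference measurement concrete, the following is a minimal, hypothetical sketch (not the measurement derived in this paper): for a single training point, it scores how separable the loss distributions are between shadow models trained with versus without that point, reporting the best single-threshold attack advantage (TPR minus FPR). The simulated loss distributions and the function name are illustrative assumptions.

```python
import numpy as np

def instance_mia_score(losses_in, losses_out):
    """Hypothetical per-instance membership inference score.

    Compares the losses a fixed example receives under models trained
    WITH it (losses_in) vs WITHOUT it (losses_out), and returns the best
    single-threshold attack advantage max_t (TPR - FPR), a value in [0, 1].
    A score near 1 means the point's membership is easy to infer.
    """
    thresholds = np.concatenate([losses_in, losses_out])
    best = 0.0
    for t in thresholds:
        tpr = np.mean(losses_in <= t)   # members tend to have lower loss
        fpr = np.mean(losses_out <= t)  # non-members tend to have higher loss
        best = max(best, tpr - fpr)
    return best

# Simulated shadow-model losses for one training point (assumed for illustration):
rng = np.random.default_rng(0)
losses_in = rng.normal(0.2, 0.1, 100)   # point in the training set -> low loss
losses_out = rng.normal(1.0, 0.3, 100)  # point held out -> higher loss
print(instance_mia_score(losses_in, losses_out))
```

Because the score is computed per example, it can vary widely across the training set, which is exactly the instance-wise disparity in privacy risk that the abstract refers to.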