Block ModShift: Model Privacy via Dynamic Designed Shifts
Abstract
The problem of multi-shot model privacy against an eavesdropper (Eve) in a distributed learning environment is investigated. The solution is obtained by evaluating the Fisher Information Matrix (FIM) of Eve's model-learning problem. Through a model shift design process, the eavesdropper's FIM can be driven to singularity, yielding a provably hard estimation problem for Eve. The resulting shifts are time-varying, preventing Eve from exploiting the temporal correlation of the updates to aid her estimation. A convergence test is designed for Eve to determine whether model updates have been tampered with; the Block ModShift strategy passes this test, and the shifts are therefore not detectable. Block ModShift is compared against a noise-injection scheme and shown to offer superior performance. We numerically demonstrate the efficacy of Block ModShift in preventing temporal leakage in a setup biased toward Eve's learning ability, in which she uses Kalman smoothing to estimate the updates.
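To make the FIM-singularity idea concrete, the following is a minimal toy sketch (not the paper's Block ModShift algorithm): it assumes Eve observes a linearly transformed model update in Gaussian noise, y = A w + n, so her FIM is AᵀA/σ²; a design that makes A rank-deficient drives the FIM to singularity, and the Cramér-Rao bound becomes unbounded along the null space. The matrices and dimensions below are illustrative assumptions.

```python
# Toy illustration only: linear-Gaussian observation y = A w + n, n ~ N(0, sigma2 * I).
# The FIM for w is F = A^T A / sigma2. If the "designed" A is rank-deficient,
# F is singular and no finite-variance unbiased estimate of the full model exists.
import numpy as np

rng = np.random.default_rng(0)
d = 4            # model dimension (illustrative)
sigma2 = 0.1     # observation noise variance (illustrative)

# Benign observation: full-rank A, Eve's FIM is invertible.
A_full = rng.standard_normal((d, d))
F_full = A_full.T @ A_full / sigma2

# "Designed" observation: make the last column a linear combination of the others,
# so A (and hence the FIM) loses rank.
A_sing = A_full.copy()
A_sing[:, -1] = A_sing[:, :-1].sum(axis=1)
F_sing = A_sing.T @ A_sing / sigma2

print("rank of benign FIM:  ", np.linalg.matrix_rank(F_full))   # d
print("rank of designed FIM:", np.linalg.matrix_rank(F_sing))   # d - 1
print("smallest eigenvalue: ", np.linalg.eigvalsh(F_sing)[0])   # ~ 0 (singular)
```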