We address the problem of efficiently and securely enabling certified predictions on deep learning models. We consider the scenario where a party P1 owns a confidential model that has been certified by an authority to have a certain property, e.g., fairness. Subsequently, another party P2 wants to perform a prediction on the model with an assurance that the certified model was used. We present a solution to this problem based on MPC commitments. Our constructions operate in the publicly verifiable covert (PVC) security model, a relaxation of the malicious model of MPC that is appropriate in settings where P1 faces reputational harm if caught cheating. We introduce the notion of a PVC commitment scheme, together with indexed hash functions, to build commitment schemes tailored to the PVC framework, and propose constructions for both arithmetic and Boolean circuits that result in very efficient circuits. From a practical standpoint, our constructions for Boolean circuits are 60x faster to evaluate securely, and use 36x less communication, than baseline methods based on hashing. Moreover, we show that our constructions are tight in terms of the required number of non-linear operations, and we present a technique to amplify the security of our constructions that allows us to efficiently recover malicious guarantees with statistical security.
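For intuition about the baseline the abstract compares against, a "commitment based on hashing" lets P1 bind itself to the certified model up front: P1 publishes H(nonce ‖ model), and P2 later checks that the model used in the prediction opens that commitment. The sketch below is a plain (non-MPC) hash commitment, not the paper's PVC-tailored construction; all function and variable names are hypothetical.

```python
import hashlib
import secrets

# Illustrative sketch only: a standard hash-based commitment, evaluated in
# the clear. In the paper's setting the check runs inside MPC, which is
# exactly why hash-based baselines are expensive there.

def commit(message: bytes) -> tuple[bytes, bytes]:
    """Commit to `message`; returns (commitment, opening nonce)."""
    nonce = secrets.token_bytes(32)
    commitment = hashlib.sha256(nonce + message).digest()
    return commitment, nonce

def verify(commitment: bytes, message: bytes, nonce: bytes) -> bool:
    """Check that (message, nonce) is a valid opening of `commitment`."""
    return hashlib.sha256(nonce + message).digest() == commitment

# P1 commits to (a digest of) the certified model's weights at
# certification time; P2 checks the opening at prediction time.
model_digest = hashlib.sha256(b"certified-model-weights").digest()
c, r = commit(model_digest)
assert verify(c, model_digest, r)        # honest opening accepted
assert not verify(c, b"other-model", r)  # a different model is rejected
```

Evaluating SHA-256 inside a secure computation costs many non-linear gates per call, which is the overhead the paper's MPC-friendly commitments are designed to avoid.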
Author Information
Nitin Agrawal (University of Oxford)
James Bell (Alan Turing Institute)
Matt Kusner (University College London)
More from the Same Authors
- 2020: Secure Single-Server Aggregation with (Poly)Logarithmic Overhead
  James Bell
- 2022: Partial identification without distributional assumptions
  Kirtan Padh · Jakob Zeitler · David Watson · Matt Kusner · Ricardo Silva · Niki Kilbertus
- 2022 Workshop: Algorithmic Fairness through the Lens of Causality and Privacy
  Awa Dieng · Miriam Rateike · Golnoosh Farnadi · Ferdinando Fioretto · Matt Kusner · Jessica Schrouff
- 2022 Poster: Local Latent Space Bayesian Optimization over Structured Inputs
  Natalie Maus · Haydn Jones · Juston Moore · Matt Kusner · John Bradshaw · Jacob Gardner
- 2022 Poster: When Do Flat Minima Optimizers Work?
  Jean Kaddour · Linqing Liu · Ricardo Silva · Matt Kusner
- 2021 Poster: Causal Effect Inference for Structured Treatments
  Jean Kaddour · Yuchen Zhu · Qi Liu · Matt Kusner · Ricardo Silva
- 2020 Workshop: Machine Learning for Molecules
  José Miguel Hernández-Lobato · Matt Kusner · Brooks Paige · Marwin Segler · Jennifer Wei
- 2020 Workshop: Algorithmic Fairness through the Lens of Causality and Interpretability
  Awa Dieng · Jessica Schrouff · Matt Kusner · Golnoosh Farnadi · Fernando Diaz
- 2020 Workshop: Privacy Preserving Machine Learning - PriML and PPML Joint Edition
  Borja Balle · James Bell · Aurélien Bellet · Kamalika Chaudhuri · Adria Gascon · Antti Honkela · Antti Koskela · Casey Meehan · Olga Ohrimenko · Mi Jung Park · Mariana Raykova · Mary Anne Smart · Yu-Xiang Wang · Adrian Weller
- 2020 Poster: A Class of Algorithms for General Instrumental Variable Models
  Niki Kilbertus · Matt Kusner · Ricardo Silva
- 2020 Poster: Barking up the right tree: an approach to search over molecule synthesis DAGs
  John Bradshaw · Brooks Paige · Matt Kusner · Marwin Segler · José Miguel Hernández-Lobato
- 2020 Spotlight: Barking up the right tree: an approach to search over molecule synthesis DAGs
  John Bradshaw · Brooks Paige · Matt Kusner · Marwin Segler · José Miguel Hernández-Lobato
- 2019 Poster: A Model to Search for Synthesizable Molecules
  John Bradshaw · Brooks Paige · Matt Kusner · Marwin Segler · José Miguel Hernández-Lobato