Poster
in
Workshop: Privacy in Machine Learning (PriML) 2021

Certified Predictions using MPC-Friendly Publicly Verifiable Covertly Secure Commitments

Nitin Agrawal · James Bell · Matt Kusner


Abstract:

We address the problem of efficiently and securely enabling certified predictions on deep learning models. We consider the scenario where a party P1 owns a confidential model that has been certified by an authority to have a certain property (e.g., fairness). Subsequently, another party P2 wants to perform a prediction on the model with an assurance that the certified model was used. We present a solution to this problem based on MPC commitments. Our constructions operate in the publicly verifiable covert (PVC) security model, a relaxation of the malicious model of MPC that is appropriate in settings where P1 faces reputational harm if caught cheating. We introduce the notion of a PVC commitment scheme, along with indexed hash functions, to build commitment schemes tailored to the PVC framework, and propose highly efficient constructions for both arithmetic and Boolean circuits. From a practical standpoint, our constructions for Boolean circuits are 60x faster to evaluate securely, and use 36x less communication, than baseline methods based on hashing. Moreover, we show that our constructions are tight in terms of required non-linear operations, and we present a technique that amplifies the security properties of our constructions, allowing malicious-security guarantees to be recovered efficiently with statistical security.
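The hashing baseline the abstract compares against can be understood as a standard hash-based commitment: P1 binds itself to the certified model's parameters up front, and the commitment can later be opened to prove the same model was used. The sketch below is illustrative only — the function names and serialization are assumptions, not the paper's PVC construction, and a real deployment would commit inside the MPC rather than in the clear.

```python
import hashlib
import secrets

def commit(model_bytes: bytes) -> tuple[bytes, bytes]:
    """Hash-based commitment: returns (commitment, opening).

    The random opening r hides model_bytes (hiding); finding a second
    preimage for the hash is assumed infeasible (binding).
    """
    r = secrets.token_bytes(32)
    c = hashlib.sha256(r + model_bytes).digest()
    return c, r

def verify(commitment: bytes, model_bytes: bytes, r: bytes) -> bool:
    """Check that (model_bytes, r) opens the given commitment."""
    return hashlib.sha256(r + model_bytes).digest() == commitment

# P1 commits to the certified model's serialized weights (placeholder bytes);
# the certifying authority would publish or sign this commitment.
params = b"serialized-model-weights"
c, r = commit(params)

# Later, P2 (or an auditor) checks the model against the certificate.
assert verify(c, params, r)
assert not verify(c, b"tampered-weights", r)
```

The cost the paper targets comes from evaluating such a hash inside a secure computation: SHA-256 is dominated by non-linear (AND) gates, which are expensive in MPC, motivating the MPC-friendly commitments in this work.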