QUOTIENT: Two-Party Secure Neural Network Training & Prediction
Nitin Agrawal · Matt Kusner · Adria Gascon

Sat Dec 14 09:25 AM -- 09:45 AM (PST)

Recently, there has been a wealth of effort devoted to the design of secure protocols for machine learning tasks. Much of this is aimed at enabling secure prediction from highly accurate Deep Neural Networks (DNNs). However, since DNNs are trained on data, a key question is how such models can also be trained securely. The few prior works on secure DNN training have focused either on designing custom protocols for existing training algorithms or on developing tailored training algorithms and then applying generic secure protocols. In this work, we investigate the advantages of designing training algorithms alongside a novel secure protocol, incorporating optimizations on both fronts. We present QUOTIENT, a new method for discretized training of DNNs, along with a customized secure two-party protocol for it. QUOTIENT incorporates key components of state-of-the-art DNN training, such as layer normalization and adaptive gradient methods, and improves upon prior approaches to DNN training in two-party computation. Compared to prior work, we obtain a 50x improvement in WAN time and a 6% gain in absolute accuracy.
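Two-party protocols like the one described in the abstract are commonly built on additive secret sharing, where each value is split into two random shares so that neither party alone learns anything, yet linear operations can be computed locally on shares. The following is a minimal illustrative sketch of that building block (not the QUOTIENT protocol itself; all names are hypothetical, and a real implementation would use a cryptographically secure RNG):

```python
import random

MOD = 2 ** 64  # work in the ring Z_{2^64}, a common choice in 2PC


def share(x):
    """Split x into two additive shares that sum to x mod 2^64."""
    s0 = random.randrange(MOD)  # illustrative only; use a CSPRNG in practice
    s1 = (x - s0) % MOD
    return s0, s1


def reconstruct(s0, s1):
    """Combine both parties' shares to recover the secret."""
    return (s0 + s1) % MOD


def add_shares(a, b):
    """Each party adds its own shares locally; no communication needed."""
    return tuple((ai + bi) % MOD for ai, bi in zip(a, b))


# Example: secret-share 7 and 5, add them share-wise, reconstruct 12.
a, b = share(7), share(5)
c = add_shares(a, b)
print(reconstruct(*c))  # 12
```

Additions and scalar multiplications stay local in this scheme; secure multiplication of two shared values requires interaction (e.g. precomputed multiplication triples), which is where protocol-level optimizations such as those in QUOTIENT come into play.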

Author Information

Nitin Agrawal (University of Oxford)
Matt Kusner (University of Oxford)
Adria Gascon (Alan Turing Institute and University of Warwick)

More from the Same Authors

  • 2020 Workshop: Privacy Preserving Machine Learning - PriML and PPML Joint Edition »
    Borja Balle · James Bell · Aurélien Bellet · Kamalika Chaudhuri · Adria Gascon · Antti Honkela · Antti Koskela · Casey Meehan · Olga Ohrimenko · Mi Jung Park · Mariana Raykova · Mary Anne Smart · Yu-Xiang Wang · Adrian Weller
  • 2019 : Poster Session »
    Clement Canonne · Kwang-Sung Jun · Seth Neel · Di Wang · Giuseppe Vietri · Liwei Song · Jonathan Lebensold · Huanyu Zhang · Lovedeep Gondara · Ang Li · FatemehSadat Mireshghallah · Jinshuo Dong · Anand D Sarwate · Antti Koskela · Joonas Jälkö · Matt Kusner · Dingfan Chen · Mi Jung Park · Ashwin Machanavajjhala · Jayashree Kalpathy-Cramer · Vitaly Feldman · Andrew Tomkins · Hai Phan · Hossein Esfandiari · Mimansa Jaiswal · Mrinank Sharma · Jeff Druce · Casey Meehan · Zhengli Zhao · Hsiang Hsu · Davis Railsback · Abraham Flaxman · Julius Adebayo · Aleksandra Korolova · Jiaming Xu · Naoise Holohan · Samyadeep Basu · Matthew Joseph · My Thai · Xiaoqian Yang · Ellen Vitercik · Michael Hutchinson · Chenghong Wang · Gregory Yauney · Yuchao Tao · Chao Jin · Si Kai Lee · Audra McMillan · Rauf Izmailov · Jiayi Guo · Siddharth Swaroop · Tribhuvanesh Orekondy · Hadi Esmaeilzadeh · Kevin Procopio · Alkis Polyzotis · Jafar Mohammadi · Nitin Agrawal
  • 2018 Workshop: Machine Learning for Molecules and Materials »
    José Miguel Hernández-Lobato · Klaus-Robert Müller · Brooks Paige · Matt Kusner · Stefan Chmiela · Kristof Schütt
  • 2018 Workshop: Privacy Preserving Machine Learning »
    Adria Gascon · Aurélien Bellet · Niki Kilbertus · Olga Ohrimenko · Mariana Raykova · Adrian Weller
  • 2018 Workshop: Critiquing and Correcting Trends in Machine Learning »
    Thomas Rainforth · Matt Kusner · Benjamin Bloem-Reddy · Brooks Paige · Rich Caruana · Yee Whye Teh
  • 2017 Workshop: Machine Learning for Molecules and Materials »
    Kristof Schütt · Klaus-Robert Müller · Anatole von Lilienfeld · José Miguel Hernández-Lobato · Klaus-Robert Müller · Alan Aspuru-Guzik · Bharath Ramsundar · Matt Kusner · Brooks Paige · Stefan Chmiela · Alexandre Tkatchenko · Anatole von Lilienfeld · Koji Tsuda
  • 2017 Poster: Counterfactual Fairness »
    Matt Kusner · Joshua Loftus · Chris Russell · Ricardo Silva
  • 2017 Oral: Counterfactual Fairness »
    Matt Kusner · Joshua Loftus · Chris Russell · Ricardo Silva
  • 2017 Poster: When Worlds Collide: Integrating Different Counterfactual Assumptions in Fairness »
    Chris Russell · Matt Kusner · Joshua Loftus · Ricardo Silva