

Invited Talk
in
Workshop: Synthetic Data for Empowering ML Research

Invited Talk #1, Differentially Private Learning with Margin Guarantees, Mehryar Mohri



Abstract:


Preserving privacy is a crucial objective for machine learning algorithms. Yet, despite remarkable theoretical and algorithmic progress in differential privacy over the past decade, its application to learning still faces several obstacles.

A recent series of publications has shown that differentially private PAC learning of infinite hypothesis sets is not possible, even for common hypothesis sets such as that of linear functions. Another rich body of literature has studied differentially private empirical risk minimization in a constrained optimization setting and shown that the resulting guarantees are necessarily dimension-dependent. In the unconstrained setting, dimension-independent bounds have been given, but they depend on the norm of a vector that can be extremely large, which makes them uninformative.
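The dimension dependence mentioned above can be seen in standard output-perturbation mechanisms for private ERM, where Gaussian noise is added to every coordinate of the learned parameter vector. The sketch below is illustrative only, assuming an L2-regularized least-squares objective and a standard (loose) sensitivity bound; it is not a construction from the talk.

```python
# Hedged sketch: output perturbation for differentially private ERM.
# Illustrates why naive guarantees scale with the dimension d: Gaussian
# noise is added to all d coordinates of the learned weight vector.
import numpy as np

def dp_erm_output_perturbation(X, y, lam, epsilon, delta, rng):
    """Train L2-regularized least squares, then add Gaussian noise.

    Assumes each row of X has norm <= 1 and |y| <= 1; the sensitivity
    bound is a standard illustrative one, not a result from the talk.
    """
    n, d = X.shape
    # Ridge-regression ERM solution.
    w = np.linalg.solve(X.T @ X + lam * n * np.eye(d), X.T @ y)
    # Illustrative L2-sensitivity bound for the regularized minimizer.
    sensitivity = 2.0 / (n * lam)
    # Gaussian-mechanism noise scale for (epsilon, delta)-DP.
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    # Noise is added in all d coordinates: excess risk grows with d.
    return w + rng.normal(0.0, sigma, size=d)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))
y = np.clip(X @ np.ones(5), -1.0, 1.0)
w_priv = dp_erm_output_perturbation(X, y, lam=0.1, epsilon=1.0,
                                    delta=1e-5, rng=rng)
```

Because the noise is isotropic in all `d` coordinates, the expected excess risk picks up a factor that grows with the dimension, which is exactly the obstacle the abstract describes.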

These results raise a fundamental question about private learning for common high-dimensional problems: is differentially private learning with favorable, dimension-independent guarantees possible for standard hypothesis sets?

This talk presents a series of new differentially private algorithms for learning linear classifiers, kernel classifiers, and neural-network classifiers with dimension-independent, confidence-margin guarantees.
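One generic route to dimension-independent guarantees for large-margin classifiers is to randomly project the data into a low-dimensional space whose size depends on the margin rather than on the ambient dimension, and then privatize the classifier learned there. The sketch below illustrates that general idea only; it is not the talk's algorithm, and the projection size `k`, the least-squares surrogate, and the sensitivity bound are all illustrative assumptions.

```python
# Hedged sketch: Johnson-Lindenstrauss-style random projection followed
# by a privately perturbed linear classifier in the projected space.
# Illustrates trading the ambient dimension d for a margin-driven
# dimension k; NOT the algorithm presented in the talk.
import numpy as np

def private_margin_classifier(X, y, k, epsilon, delta, rng):
    n, d = X.shape
    # Random projection: k is chosen from the margin, independent of d.
    P = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))
    Z = X @ P.T
    # Simple ERM surrogate in the projected space (least squares on
    # labels in {-1, +1}); an illustrative stand-in, not the talk's loss.
    w = np.linalg.lstsq(Z, y, rcond=None)[0]
    # Gaussian-mechanism noise now scales with k, not with d.
    sensitivity = 2.0 / n  # illustrative placeholder bound
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    w_priv = w + rng.normal(0.0, sigma, size=k)
    return P, w_priv

def predict(P, w_priv, X):
    # Classify in the projected space.
    return np.sign(X @ P.T @ w_priv)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 50))
y = np.sign(X[:, 0] + 1e-9)  # labels from the first coordinate
P, w_priv = private_margin_classifier(X, y, k=10, epsilon=1.0,
                                      delta=1e-5, rng=rng)
preds = predict(P, w_priv, X)
```

Since the added noise lives in the k-dimensional projected space, its effect on the classifier depends on k (a margin-driven quantity) rather than on the original dimension d, mirroring the kind of dimension-independent guarantee the talk targets.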

Joint work with Raef Bassily and Ananda Theertha Suresh.
