Workshop on Dataset Curation and Security

Nathalie Baracaldo Angel, Yonatan Bisk, Avrim Blum, Michael Curry, John Dickerson, Micah Goldblum, Tom Goldstein, Bo Li, Avi Schwarzschild

Fri, Dec 11th, 2020 @ 14:00 – 19:00 GMT
Abstract: Classical machine learning research has focused largely on models, optimizers, and computational challenges. As technical progress and hardware advances ease these challenges, practitioners are finding that the limitations and faults of their models stem from their datasets. This is particularly true of deep networks, which often rely on huge datasets that are too large and unwieldy for domain experts to curate by hand. This workshop addresses issues in the following areas: data harvesting, dealing with the challenges and opportunities involved in creating and labeling massive datasets; data security, dealing with protecting datasets against poisoning and backdoor attacks; policy, security, and privacy, dealing with the social, ethical, and regulatory issues involved in collecting large datasets, especially with regard to privacy; and data bias, concerning the potential of biased datasets to produce biased models that harm members of certain groups. Dates and details can be found at [securedata.lol](https://securedata.lol/).


Schedule

14:00 – 14:30 GMT
Dawn Song (topic TBD)
Dawn Song
14:30 – 15:00 GMT
What Do Our Models Learn?
Aleksander Madry
Large-scale vision benchmarks have driven, and often even defined, progress in machine learning. However, these benchmarks are merely proxies for the real-world tasks we actually care about. How well do our benchmarks capture such tasks? In this talk, I will discuss the alignment between our benchmark-driven ML paradigm and the real-world use cases that motivate it. First, we will explore examples of biases in the ImageNet dataset and how state-of-the-art models exploit them. We will then demonstrate how these biases arise as a result of design choices in the data collection and curation processes. Throughout, we illustrate how one can leverage relatively standard tools (e.g., crowdsourcing, image processing) to quantify the biases that we observe. Based on joint work with Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, and Kai Xiao.
15:00 – 15:15 GMT
Discussion
15:15 – 15:30 GMT
Break
15:30 – 16:00 GMT
Darrell West (TBD)
Darrell West
16:00 – 16:30 GMT
Adversarial, Socially Aware, and Commonsensical Data
Yejin Choi
16:30 – 16:45 GMT
Discussion panel
16:45 – 18:00 GMT
Lunch Break
18:00 – 18:30 GMT
Dataset Curation via Active Learning
Robert Nowak
18:30 – 19:00 GMT
Don't Steal Data
Liz O'Sullivan