Workshop on Dataset Curation and Security
Nathalie Baracaldo Angel, Yonatan Bisk, Avrim Blum, Michael Curry, John Dickerson, Micah Goldblum, Tom Goldstein, Bo Li, Avi Schwarzschild
2020-12-11T06:00:00-08:00 - 2020-12-11T11:00:00-08:00
Abstract: Classical machine learning research has focused largely on models, optimizers, and computational challenges. As technical progress and hardware advancements ease these challenges, practitioners are now finding that the limitations and faults of their models are the result of their datasets. This is particularly true of deep networks, which often rely on datasets too large and unwieldy for domain experts to curate by hand. This workshop addresses issues in the following areas: data harvesting, dealing with the challenges and opportunities involved in creating and labeling massive datasets; data security, dealing with protecting datasets against the risks of poisoning and backdoor attacks; policy, security, and privacy, dealing with the social, ethical, and regulatory issues involved in collecting large datasets, especially with regard to privacy; and data bias, concerning the potential of biased datasets to produce biased models that harm members of certain groups. Dates and details can be found at [securedata.lol](https://securedata.lol/)
2020-12-11T06:00:00-08:00 - 2020-12-11T06:30:00-08:00
Dawn Song (topic TBD)
2020-12-11T06:30:00-08:00 - 2020-12-11T07:00:00-08:00
What Do Our Models Learn?
Large-scale vision benchmarks have driven—and often even defined—progress in machine learning. However, these benchmarks are merely proxies for the real-world tasks we actually care about. How well do our benchmarks capture such tasks? In this talk, I will discuss the alignment between our benchmark-driven ML paradigm and the real-world use cases that motivate it. First, we will explore examples of biases in the ImageNet dataset, and how state-of-the-art models exploit them. We will then demonstrate how these biases arise as a result of design choices in the data collection and curation processes. Throughout, we illustrate how one can leverage relatively standard tools (e.g., crowdsourcing, image processing) to quantify the biases that we observe. Based on joint work with Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, and Kai Xiao.
2020-12-11T07:00:00-08:00 - 2020-12-11T07:15:00-08:00
2020-12-11T07:15:00-08:00 - 2020-12-11T07:30:00-08:00
2020-12-11T07:30:00-08:00 - 2020-12-11T08:00:00-08:00
Darrell West (TBD)
2020-12-11T08:00:00-08:00 - 2020-12-11T08:30:00-08:00
Adversarial, Socially Aware, and Commonsensical Data
2020-12-11T08:30:00-08:00 - 2020-12-11T08:45:00-08:00
2020-12-11T08:45:00-08:00 - 2020-12-11T10:00:00-08:00
2020-12-11T10:00:00-08:00 - 2020-12-11T10:30:00-08:00
Dataset Curation via Active Learning
2020-12-11T10:30:00-08:00 - 2020-12-11T11:00:00-08:00
Don't Steal Data