

Talk in Workshop: Shared Visual Representations in Human and Machine Intelligence

CIFAR-10H: using human-derived soft-label distributions to support more robust and generalizable classification

Ruairidh Battleday


Abstract:

The classification performance of deep neural networks has begun to asymptote at near-perfect levels on natural image benchmarks. However, their ability to generalize outside the training set and their robustness to adversarial attacks have not. Humans, by contrast, exhibit robust and graceful generalization far outside their set of training samples. In this talk, I will discuss one strategy for translating these properties to machine-learning classifiers: training them to be uncertain in the same way as humans, rather than always right. When we integrate human uncertainty into training paradigms by using human guess distributions as labels, we find that classifiers generalize better and are more robust to adversarial attacks. Rather than expect all image datasets to come with such labels, we instead intend our CIFAR-10H dataset to be used as a gold standard, against which algorithmic means of capturing the same information can be evaluated. To illustrate this, I present one automated method that does so—deep prototype models inspired by the cognitive science literature.
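The core training change described above—replacing one-hot labels with human guess distributions—amounts to minimizing cross-entropy against a full probability distribution per image. The sketch below illustrates this with NumPy; the specific logits and the 0.8/0.2 human guess split are hypothetical examples, not values from the CIFAR-10H dataset.

```python
import numpy as np

def soft_label_cross_entropy(logits, target_dist):
    """Cross-entropy between predicted softmax and a full label distribution.

    With a one-hot target this reduces to standard cross-entropy; with a
    human guess distribution the model is additionally penalized for
    assigning no mass to plausible human confusions.
    """
    logits = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    return -(target_dist * log_probs).sum(axis=-1).mean()

# Hypothetical 10-class logits for one image the network calls "cat"
logits = np.zeros((1, 10))
logits[0, 3] = 4.0  # cat
logits[0, 5] = 2.0  # dog

hard = np.zeros((1, 10))
hard[0, 3] = 1.0                      # standard one-hot label

soft = np.zeros((1, 10))
soft[0, 3], soft[0, 5] = 0.8, 0.2     # hypothetical human guess distribution

loss_hard = soft_label_cross_entropy(logits, hard)
loss_soft = soft_label_cross_entropy(logits, soft)
```

Because the soft target spreads mass onto "dog" while the network concentrates on "cat", `loss_soft` exceeds `loss_hard` here; the gradient correspondingly pushes the network toward human-like uncertainty rather than ever-sharper one-hot confidence.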
