Poster

Adversarial Examples that Fool both Computer Vision and Time-Limited Humans

Gamaleldin Elsayed · Shreya Shankar · Brian Cheung · Nicolas Papernot · Alexey Kurakin · Ian Goodfellow · Jascha Sohl-Dickstein

Room 517 AB #108

Keywords: [ Deep Learning ] [ Computer Vision ] [ Adversarial Networks ] [ Computational Biology and Bioinformatics ] [ Neuroscience ] [ Visual Perception ] [ CNN Architectures ] [ Neuroscience and cognitive science ]


Abstract:

Machine learning models are vulnerable to adversarial examples: small changes to images can cause computer vision models to make mistakes such as identifying a school bus as an ostrich. However, it is still an open question whether humans are prone to similar mistakes. Here, we address this question by leveraging recent techniques that transfer adversarial examples from computer vision models with known parameters and architecture to other models with unknown parameters and architecture, and by matching the initial processing of the human visual system. We find that adversarial examples that strongly transfer across computer vision models influence the classifications made by time-limited human observers.
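The transfer technique referenced in the abstract relies on crafting a perturbation against several white-box models at once, since such perturbations are more likely to fool an unseen model (including, per the paper's hypothesis, the time-limited human observer). The sketch below is a minimal illustration of that ensemble idea, assuming a single FGSM-style signed-gradient step in PyTorch; the toy models, epsilon value, and input shapes are placeholders, not the paper's actual setup.

```python
import torch
import torch.nn as nn

def ensemble_fgsm(image, label, models, epsilon=0.03):
    """Craft one perturbation that raises the loss of every model in the ensemble.

    Averaging the classification loss over several white-box models and taking
    a single signed-gradient step tends to produce perturbations that transfer
    better to models whose parameters and architecture are unknown.
    """
    x = image.clone().detach().requires_grad_(True)
    loss_fn = nn.CrossEntropyLoss()

    # Average the loss across the ensemble, then backpropagate to the input.
    total_loss = sum(loss_fn(model(x), label) for model in models) / len(models)
    total_loss.backward()

    # One signed-gradient step, clipped back to a valid image range.
    adv = x + epsilon * x.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy stand-ins for real CNN classifiers, used only to make the sketch runnable.
    models = [nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)) for _ in range(3)]
    image = torch.rand(1, 3, 32, 32)
    label = torch.tensor([4])
    adv_image = ensemble_fgsm(image, label, models)
    print((adv_image - image).abs().max())  # perturbation magnitude is at most epsilon
```

A stronger attack would iterate this step and add the retinal/foveation preprocessing the paper uses to match the initial processing of the human visual system; the single-step version above only conveys the ensemble-transfer principle.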
