
What does an Adversarial Color look like?
John Chin · Arturo Deza
Event URL: https://openreview.net/forum?id=yigR5nrCITN

The short answer: it depends! The long answer is that this dependence is modulated by several factors, including the architecture, dataset, optimizer, and initialization. In general, this modulation is likely due to the fact that artificial perceptual systems are best suited for tasks aligned with their level of compositionality: when these systems are optimized to perform a global task such as average color estimation rather than object recognition (which is compositional), different representations emerge in the optimized networks. In this paper, we first assess the novelty of our experiment and define what an adversarial example is in the context of the color estimation task. We then run controlled experiments in which we vary four neural-network hyper-parameters: 1) the architecture, 2) the optimizer, 3) the dataset, and 4) the weight initialization. Generally, we find that a fully connected network's attack vector is sparser than a compositional CNN's, although the SGD optimizer modulates the attack vector to be less sparse regardless of architecture. We also discover that the attack vector of a CNN is more consistent across datasets, and we confirm that the CNN is more robust to adversarial color attacks. Altogether, this paper presents a first computational exploration of the qualitative assessment of adversarial color perception in simple neural network models, re-emphasizing that studies in adversarial robustness and vulnerability should extend beyond object recognition.
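To make the notion of an adversarial example for color estimation concrete, here is a minimal, hypothetical sketch (not the paper's actual models or attack): a toy "network" whose output is the image's per-channel mean color, attacked with an FGSM-style perturbation. Because the model and MSE loss are analytic, the pixel gradient's sign is constant per channel, so the attack reduces to a small uniform shift. All names (`avg_color`, `fgsm_color_attack`, `eps`) are illustrative assumptions.

```python
import numpy as np

def avg_color(img):
    # Toy "perceptual system": estimate the average color of an image
    # (per-channel mean over all pixels).
    return img.mean(axis=(0, 1))

def fgsm_color_attack(img, target, eps=0.05):
    """Untargeted FGSM-style attack on the average-color estimate.

    For f(x) = per-pixel mean and MSE loss to a target color, the
    gradient of the loss w.r.t. each pixel is 2 * (f(x) - target) / N,
    so sign(grad) is the same for every pixel within a channel.
    Adding eps * sign(grad) pushes the prediction away from the target.
    """
    pred = avg_color(img)
    grad_sign = np.sign(pred - target)          # broadcasts per channel
    adv = np.clip(img + eps * grad_sign, 0.0, 1.0)
    return adv

# Usage: perturb a random 8x8 RGB image away from a chosen color.
rng = np.random.default_rng(0)
img = rng.random((8, 8, 3))
target = np.array([1.0, 0.0, 0.5])
adv = fgsm_color_attack(img, target, eps=0.05)
```

In a learned model (fully connected network or CNN, as in the paper), the gradient sign would instead vary across pixels, and the spatial pattern of that perturbation is what the abstract refers to as the attack vector's sparsity.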

Author Information

John Chin (Massachusetts Institute of Technology)

John is currently a senior in high school (homeschooled) and an avid pursuer of all things computer-vision related. In his junior and senior years, he interned at MIT's Center for Brains, Minds, and Machines, working in Professor Tomaso Poggio's lab under adviser Arturo Deza to investigate adversarial robustness for non-object-recognition tasks. Most recently, he submitted a paper to SVRHM on adversarial color perception, and he plans to research biologically plausible mechanisms for adversarial robustness throughout college.

Arturo Deza (Artificio)
