The short answer: it depends! The long answer is that this dependence is modulated by several factors, including the architecture, dataset, optimizer, and initialization. This modulation is likely due to the fact that artificial perceptual systems are best suited for tasks aligned with their level of compositionality: when such systems are optimized for a global task like average color estimation rather than object recognition (which is compositional), different representations emerge in the optimized networks. In this paper, we first assess the novelty of our experiment and define what an adversarial example is in the context of the color estimation task. We then run controlled experiments in which we independently vary four factors: 1) the architecture, 2) the optimizer, 3) the dataset, and 4) the weight initialization. Generally, we find that a fully connected network's attack vector is sparser than a compositional CNN's, although the SGD optimizer makes the attack vector less sparse regardless of architecture. We also find that the attack vector of a CNN is more consistent across datasets, and we confirm that the CNN is more robust to adversarial color attacks. Altogether, this paper presents a first computational, qualitative exploration of adversarial color perception in simple neural network models, re-emphasizing that studies of adversarial robustness and vulnerability should extend beyond object recognition.
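To make the abstract's notion of an adversarial example for color estimation concrete, here is a minimal, hypothetical sketch. It is not the paper's actual models or attack procedure: the tiny CNN regressor, the MSE loss, and the step size eps are all assumptions. It shows a single FGSM-style step that perturbs an input image so that a network's average-color estimate drifts toward an adversarial target color.

import torch
import torch.nn as nn

# Hypothetical regressor: maps an image to its estimated average RGB color.
# (The architecture is an assumption; the paper compares fully connected and CNN models.)
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 3),  # predicted mean (R, G, B)
)

def adversarial_color_attack(image, target_color, eps=0.03):
    """One FGSM-style step: nudge `image` so the model's average-color
    estimate moves toward `target_color` (values in [0, 1])."""
    image = image.clone().detach().requires_grad_(True)
    pred = model(image)
    loss = nn.functional.mse_loss(pred, target_color)
    loss.backward()
    # Step *against* the gradient to pull the prediction toward the target.
    adv = (image - eps * image.grad.sign()).clamp(0, 1)
    return adv.detach()

x = torch.rand(1, 3, 32, 32)         # random input image
t = torch.tensor([[1.0, 0.0, 0.0]])  # adversarial target: "red"
x_adv = adversarial_color_attack(x, t)
print(model(x), model(x_adv))        # estimate shifts toward the target

In the paper's terms, the perturbation x_adv - x would correspond to the "attack vector" whose sparsity and consistency are compared across architectures, optimizers, datasets, and weight initializations.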
Author Information
John Chin (Massachusetts Institute of Technology)
John is currently a senior in high school (homeschooled) and an avid pursuer of all things computer vision related. In his junior and senior years, he interned at MIT's Center for Brains, Minds, and Machines, working in Professor Tomaso Poggio's lab under adviser Arturo Deza to investigate adversarial robustness for non-object-recognition tasks. Most recently, he submitted a paper to SVRHM on adversarial color perception, and he plans to research biologically plausible mechanisms for adversarial robustness throughout college.
Arturo Deza (Artificio)
More from the Same Authors
- 2022: Joint rotational invariance and adversarial training of a dual-stream Transformer yields state of the art Brain-Score for Area V4 » William Berrios · Arturo Deza
- 2022: Closing Remarks, Award Ceremony and Reception » Arturo Deza
- 2022 Workshop: Shared Visual Representations in Human and Machine Intelligence (SVRHM) » Arturo Deza · Joshua Peterson · N Apurva Ratan Murty · Tom Griffiths
- 2021: Finding Biological Plausibility for Adversarially Robust Features via Metameric Tasks » Anne Harrington · Arturo Deza
- 2021: Evaluating the Adversarial Robustness of a Foveated Texture Transform Module in a CNN » Jonathan Gant · Andrzej Banburski · Arturo Deza
- 2021: On the use of Cortical Magnification and Saccades as Biological Proxies for Data Augmentation » Binxu Wang · David Mayo · Arturo Deza · Andrei Barbu · Colin Conwell
- 2021: What Matters In Branch Specialization? Using a Toy Task to Make Predictions » Chenguang Li · Arturo Deza
- 2021 Workshop: Shared Visual Representations in Human and Machine Intelligence » Arturo Deza · Joshua Peterson · N Apurva Ratan Murty · Tom Griffiths
- 2020 Workshop: Shared Visual Representations in Human and Machine Intelligence (SVRHM) » Arturo Deza · Joshua Peterson · N Apurva Ratan Murty · Tom Griffiths
- 2019: Concluding Remarks & Prizes Ceremony » Arturo Deza · Joshua Peterson · Apurva Ratan Murty · Tom Griffiths
- 2019: Opening Remarks » Arturo Deza · Joshua Peterson · Apurva Ratan Murty · Tom Griffiths
- 2019 Workshop: Shared Visual Representations in Human and Machine Intelligence » Arturo Deza · Joshua Peterson · Apurva Ratan Murty · Tom Griffiths