Joint rotational invariance and adversarial training of a dual-stream Transformer yields state of the art Brain-Score for Area V4
William Berrios · Arturo Deza
Event URL: https://openreview.net/forum?id=OruB0mPF1Ef
Modern high-scoring models of vision in the Brain-Score competition do not stem from Vision Transformers. However, in this paper, we provide evidence against the unexpected trend of Vision Transformers (ViT) not being perceptually aligned with human visual representations by showing how a dual-stream Transformer, a CrossViT $\textit{à la}$ Chen et al. (2021), under a joint rotationally-invariant and adversarial optimization procedure yields 2nd place in the aggregate Brain-Score 2022 competition (Schrimpf et al., 2020b) averaged across all visual categories, and at the time of the competition held 1st place for the highest explainable variance of area V4. In addition, our current Transformer-based model also achieves greater explainable variance for areas V4, IT, and Behaviour than a biologically-inspired CNN (ResNet50) that integrates a frontal V1-like computation module (Dapello et al., 2020). To assess the contribution of the optimization scheme relative to the CrossViT architecture, we perform several additional experiments on differently optimized CrossViTs regarding adversarial robustness, common-corruption benchmarks, mid-ventral stimuli interpretation, and feature inversion. Against our initial expectations, our family of results provides tentative support for an $\textit{``All roads lead to Rome''}$ argument enforced via a joint optimization rule even for non-biologically-motivated models of vision such as Vision Transformers.
Author Information
William Berrios (Universidad Nacional de Ingeniería)
Arturo Deza (Artificio)
More from the Same Authors
- 2022 : What does an Adversarial Color look like?
  John Chin · Arturo Deza
- 2022 : Joint rotational invariance and adversarial training of a dual-stream Transformer yields state of the art Brain-Score for Area V4
  William Berrios · Arturo Deza
- 2022 : Closing Remarks, Award Ceremony and Reception
  Arturo Deza
- 2022 Workshop: Shared Visual Representations in Human and Machine Intelligence (SVRHM)
  Arturo Deza · Joshua Peterson · N Apurva Ratan Murty · Tom Griffiths
- 2022 : Oral Presentation 5: Joint rotational invariance and adversarial training of a dual-stream Transformer yields state of the art Brain-Score for Area V4
  William Berrios
- 2021 : Finding Biological Plausibility for Adversarially Robust Features via Metameric Tasks
  Anne Harrington · Arturo Deza
- 2021 : Evaluating the Adversarial Robustness of a Foveated Texture Transform Module in a CNN
  Jonathan Gant · Andrzej Banburski · Arturo Deza
- 2021 : On the use of Cortical Magnification and Saccades as Biological Proxies for Data Augmentation
  Binxu Wang · David Mayo · Arturo Deza · Andrei Barbu · Colin Conwell
- 2021 : What Matters In Branch Specialization? Using a Toy Task to Make Predictions
  Chenguang Li · Arturo Deza
- 2021 Workshop: Shared Visual Representations in Human and Machine Intelligence
  Arturo Deza · Joshua Peterson · N Apurva Ratan Murty · Tom Griffiths
- 2020 Workshop: Shared Visual Representations in Human and Machine Intelligence (SVRHM)
  Arturo Deza · Joshua Peterson · N Apurva Ratan Murty · Tom Griffiths
- 2019 : Concluding Remarks & Prizes Ceremony
  Arturo Deza · Joshua Peterson · Apurva Ratan Murty · Tom Griffiths
- 2019 : Opening Remarks
  Arturo Deza · Joshua Peterson · Apurva Ratan Murty · Tom Griffiths
- 2019 Workshop: Shared Visual Representations in Human and Machine Intelligence
  Arturo Deza · Joshua Peterson · Apurva Ratan Murty · Tom Griffiths