Poster in Workshop: Shared Visual Representations in Human and Machine Intelligence

Evaluating the Adversarial Robustness of a Foveated Texture Transform Module in a CNN

Jonathan Gant · Andrzej Banburski · Arturo Deza


Abstract:

Biologically inspired mechanisms such as foveation and multiple fixation points have previously been shown to help alleviate adversarial examples (Reddy et al., 2020). By mimicking the effects of visual crowding present in human vision, foveated, texture-based computations may provide another route to increasing adversarial robustness. Previous statistical models of texture rendering (Portilla & Simoncelli, 2000; Gatys et al., 2015) paved the way for the development of a Foveated Texture Transform (FTT) module that performs localized texture synthesis in foveated receptive fields (Deza et al., 2017). The FTT module was added to a VGG-11 CNN architecture, and ten randomly initialized networks were trained on 20-class subsets of the Places and EcoSet datasets for scene and object classification, respectively. The trained networks were attacked with Projected Gradient Descent (PGD), and adversarial accuracy was computed at multiple epochs to track how robustness changed over the course of training. The results indicate that the FTT module significantly improved adversarial robustness for scene classification, especially when the validation loss was at its minimum. However, the FTT module did not provide a statistically significant increase in adversarial robustness for object classification. Furthermore, we do not find a trade-off between accuracy and robustness (Tsipras et al., 2018) for the FTT module, suggesting a benefit of foveated, texture-based distortions in the latent space during learning over non-perturbed latent-space representations. Finally, we investigate the nature of these latent-space distortions with additional controls that probe other, non-texture-based directions in the latent space.
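
As a rough illustration of the evaluation protocol, the sketch below (in PyTorch, which is assumed here; the abstract does not name a framework) shows how an FTT-style module might be prepended to a VGG-11 backbone and how adversarial accuracy under a PGD attack can be computed. `FoveatedTextureTransform` is a hypothetical identity placeholder, not the actual module of Deza et al. (2017), and the attack budget (`eps`, `alpha`, `steps`) and the assumed [0, 1] pixel range are illustrative choices rather than values from the paper.

```python
# Minimal sketch, assuming PyTorch/torchvision. FoveatedTextureTransform is a
# hypothetical identity placeholder; the real FTT module (Deza et al., 2017)
# performs localized texture synthesis in foveated receptive fields.
import torch
import torch.nn as nn
import torchvision.models as models

class FoveatedTextureTransform(nn.Module):
    """Placeholder for the FTT module (identity mapping in this sketch)."""
    def forward(self, x):
        return x

def build_ftt_vgg11(num_classes=20):
    # Prepend the (placeholder) FTT module to a VGG-11 backbone, matching the
    # 20-class subsets described in the abstract.
    vgg = models.vgg11(weights=None, num_classes=num_classes)
    return nn.Sequential(FoveatedTextureTransform(), vgg)

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-infinity PGD with a random start; budget values are illustrative."""
    loss_fn = nn.CrossEntropyLoss()
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        grad = torch.autograd.grad(loss_fn(model(x_adv), y), x_adv)[0]
        # Ascend the loss, then project back into the eps-ball around x,
        # assuming inputs live in [0, 1].
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_accuracy(model, loader, device, **pgd_kwargs):
    """Fraction of examples still classified correctly after the PGD attack."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y, **pgd_kwargs)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total
```

Under these assumptions, calling `adversarial_accuracy` on checkpoints saved at successive epochs would yield the kind of robustness-over-training curve the abstract describes.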
