Poster in Workshop: Machine Learning and the Physical Sciences

Wavelets Beat Monkeys at Adversarial Robustness

Jingtong Su · Julia Kempe


Abstract:

Research on improving the robustness of neural networks to adversarial noise (imperceptible malicious perturbations of the data) has received significant attention. Neural nets struggle to recognize corrupted images that are easily recognized by humans. The currently uncontested state-of-the-art defence for obtaining robust deep neural networks is adversarial training (AT), but it consumes significantly more resources than standard training and trades off accuracy for robustness.

An inspiring recent work (Dapello et al., 2020) brings neurobiological tools to the question: how can we develop neural nets that robustly generalize like human vision? The authors design a network with a hidden first layer that mimics the primate primary visual cortex (V1), followed by a back-end structure adapted from current CNN vision models. This front-end layer, called the VOneBlock, consists of a biologically inspired Gabor filter bank with fixed, handcrafted, "biologically constrained" weights, simple- and complex-cell non-linearities, and a "V1 stochasticity generator" that injects randomness. It appears to achieve non-trivial adversarial robustness on standard vision benchmarks when tested against small perturbations.

Here we revisit this biologically inspired work, which relies heavily on handcrafted tuning of the V1 unit's parameters based on neural responses derived from experimental records of macaque monkeys. We ask whether a principled, parameter-free representation inspired by physics can achieve the same goal. We discover that the wavelet scattering transform can replace the complex V1 cortex, and that simple uniform Gaussian noise can take the role of neural stochasticity, to achieve adversarial robustness.

In extensive experiments on the CIFAR-10 benchmark with adaptive adversarial attacks, we show that: 1) the robustness of VOneBlock architectures is relatively weak (though non-zero) when the adversarial attack radius is set to commonly used benchmark values; 2) replacing the front-end VOneBlock with an off-the-shelf, parameter-free ScatterNet followed by simple uniform Gaussian noise achieves much more substantial adversarial robustness without adversarial training. Our work shows how physically inspired structures yield insights into robustness previously thought possible only by meticulously mimicking the human cortex. Physics, rather than neuroscience alone, can guide us towards more robust neural networks.
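To make the proposed pipeline concrete, below is a minimal sketch (not the authors' released code) of a scattering front-end with additive Gaussian noise feeding a conventional CNN back-end. It assumes the kymatio package for the wavelet scattering transform; the noise level and the tiny back-end are illustrative placeholders, not the paper's exact setup.

    # Minimal sketch, assuming the `kymatio` package is installed.
    import torch
    import torch.nn as nn
    from kymatio.torch import Scattering2D

    class ScatterFrontEnd(nn.Module):
        """Parameter-free scattering transform followed by Gaussian noise injection."""

        def __init__(self, noise_std: float = 0.1):  # hypothetical noise level
            super().__init__()
            # Second-order scattering on 32x32 CIFAR-10 images.
            # For J=2 and the default L=8 angles the output is (B, 3, 81, 8, 8).
            self.scattering = Scattering2D(J=2, shape=(32, 32))
            self.noise_std = noise_std

        def forward(self, x):
            s = self.scattering(x)  # fixed transform, no learned weights
            # Merge the colour and scattering-path dimensions into channels.
            s = s.reshape(s.size(0), -1, s.size(-2), s.size(-1))  # (B, 243, 8, 8)
            # Simple uniform Gaussian noise in place of the VOneBlock's
            # neurally fitted stochasticity (applied at train and test time).
            return s + self.noise_std * torch.randn_like(s)

    # Illustrative stand-in back-end; the paper attaches a standard CNN back-end.
    back_end = nn.Sequential(
        nn.Conv2d(3 * 81, 128, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(128, 10),
    )

    model = nn.Sequential(ScatterFrontEnd(), back_end)
    logits = model(torch.randn(4, 3, 32, 32))  # -> shape (4, 10)

Because the scattering coefficients are fixed rather than learned, only the back-end is trained; the noise is drawn afresh on every forward pass, matching the stochastic-inference setting described in the abstract.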
