

Poster

Full-Gradient Representation for Neural Network Visualization

Suraj Srinivas · François Fleuret

East Exhibition Hall B, C #167

Keywords: [ Visualization or Exposition Techniques for Deep Networks ] [ Deep Learning ] [ Supervised Deep Networks ] [ Applications -> Computer Vision; Deep Learning ]


Abstract:

We introduce a new tool for interpreting neural nets, namely full-gradients, which decomposes the neural net response into input sensitivity and per-neuron sensitivity components. This is the first proposed representation that satisfies two key properties, completeness and weak dependence, which provably cannot be satisfied simultaneously by any saliency map-based interpretability method. Using full-gradients, we also propose an approximate saliency map representation for convolutional nets, dubbed FullGrad, obtained by aggregating the full-gradient components.
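The decomposition above can be illustrated on a toy model. The sketch below uses a hypothetical two-layer ReLU network (not the paper's architecture) to check the completeness identity for piecewise-linear nets: the output equals the input-gradient term plus the sum of bias-gradient (per-neuron) terms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy ReLU net f(x) = w2 @ relu(W1 @ x + b1) + b2, used only to
# illustrate the full-gradient completeness identity:
#   f(x) = <grad_x f(x), x> + sum_i b_i * (df / db_i)
W1 = rng.normal(size=(4, 3))
b1 = rng.normal(size=4)
w2 = rng.normal(size=4)
b2 = rng.normal()

x = rng.normal(size=3)

z = W1 @ x + b1          # pre-activations
h = np.maximum(z, 0.0)   # ReLU
f = w2 @ h + b2          # scalar network output

mask = (z > 0).astype(float)   # ReLU derivative at this input
grad_x = W1.T @ (w2 * mask)    # input-gradient  df/dx
grad_b1 = w2 * mask            # bias-gradients  df/db1 (per-neuron terms)
grad_b2 = 1.0                  # df/db2

input_part = grad_x @ x                   # input-sensitivity component
bias_part = grad_b1 @ b1 + grad_b2 * b2   # per-neuron sensitivity components

# Completeness: the two components sum exactly to the output.
assert np.isclose(f, input_part + bias_part)
print(f, input_part + bias_part)
```

A FullGrad-style saliency map would then aggregate postprocessed (e.g. absolute-valued, normalized) versions of `grad_x * x` and the per-neuron bias-gradient maps; the exact postprocessing is a design choice in the paper.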

We experimentally evaluate the usefulness of FullGrad in explaining model behavior with two quantitative tests: pixel perturbation and remove-and-retrain. Our experiments reveal that our method explains model behavior correctly, and more comprehensively than other methods in the literature. Visual inspection also reveals that our saliency maps are sharper and more tightly confined to object regions than those of other methods.
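The pixel-perturbation protocol mentioned above can be sketched in a few lines. This is a hedged illustration of the general idea, not the paper's exact variant: rank pixels by saliency, remove the top-k, and record how much the model output changes. The `model` here is a hypothetical stand-in (a fixed linear scorer over pixels).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in model: a fixed linear scorer over flattened pixels.
w = rng.normal(size=64)

def model(img):
    return float(w @ img.ravel())

image = rng.normal(size=(8, 8))
# Toy saliency map (|gradient * input|); for this linear model the gradient is w.
saliency = np.abs(w.reshape(8, 8) * image)

def perturb_topk(img, sal, k):
    """Zero out the k pixels with the highest saliency."""
    flat = img.ravel().copy()
    idx = np.argsort(sal.ravel())[::-1][:k]
    flat[idx] = 0.0
    return flat.reshape(img.shape)

base = model(image)
# Output change after removing the k most-salient pixels; a saliency map that
# faithfully explains the model should change the output sharply for small k.
deltas = [base - model(perturb_topk(image, saliency, k)) for k in (0, 8, 16, 32)]
print(deltas)
```

Remove-and-retrain goes one step further: instead of just measuring the output change, the model is retrained on the perturbed images, and the drop in retrained accuracy is used as the evaluation signal.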
