Spotlight in Workshop: eXplainable AI approaches for debugging and diagnosis

[S5] Debugging the Internals of Convolutional Networks

Bilal Alsallakh · Narine Kokhlikyan · Vivek Miglani · Shubham Muttepawar · Edward Wang · Sara Zhang · Orion Reblitz-Richardson


Abstract:

The filters learned by Convolutional Neural Networks (CNNs) and the feature maps these filters compute are sensitive to convolution arithmetic. Several architectural choices that dictate this arithmetic can result in feature-map artifacts. These artifacts can interfere with the downstream task and degrade accuracy and robustness. We provide a number of visual debugging tools to surface feature-map artifacts and to analyze how they emerge in CNNs. These tools also help analyze the impact of such artifacts on the weights the model learns. Guided by this analysis, model developers can make informed architectural choices that verifiably mitigate harmful artifacts and improve the model's accuracy and shift robustness.
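The abstract does not spell out how such artifacts are surfaced, but a common symptom of convolution arithmetic (e.g. zero padding) is a systematic spatial bias in feature maps near image borders. The sketch below, a hypothetical illustration in PyTorch rather than the authors' actual tooling, averages activations over a batch of random inputs and over channels; spatial positions whose mean response deviates from the interior hint at padding-induced artifacts.

```python
import torch
import torch.nn as nn

# Hypothetical sketch (not the paper's tool): average activations over
# a batch and over channels to expose position-dependent biases that
# zero padding introduces near feature-map borders.
torch.manual_seed(0)

conv = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # zero padding
    nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1),  # zero padding again
    nn.ReLU(),
)

with torch.no_grad():
    x = torch.randn(64, 3, 32, 32)        # batch of random inputs
    fmap = conv(x)                        # shape (64, 16, 32, 32)
    spatial_mean = fmap.mean(dim=(0, 1))  # (32, 32) mean activation map

# Compare the border response to the interior response; a systematic
# difference is the kind of feature-map artifact the paper studies.
border = torch.cat([spatial_mean[0], spatial_mean[-1],
                    spatial_mean[1:-1, 0], spatial_mean[1:-1, -1]]).mean()
interior = spatial_mean[1:-1, 1:-1].mean()
print(f"border mean: {border:.4f}, interior mean: {interior:.4f}")
```

In practice one would visualize `spatial_mean` as a heatmap (e.g. with matplotlib) rather than reduce it to two scalars; the two-number summary above just makes the border bias easy to check programmatically.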