

Workshop

Interpreting, Explaining and Visualizing Deep Learning - Now what?

Klaus-Robert Müller · Andrea Vedaldi · Lars K Hansen · Wojciech Samek · Grégoire Montavon

Hyatt Hotel, Regency Ballroom A+B+C

Machine learning has become an indispensable tool for tasks ranging from the detection of objects in images to the understanding of natural language. While these models reach impressively high predictive accuracy, they are often perceived as black boxes, and it is not clear which information in the input data they use for prediction. In sensitive applications such as medical diagnosis or self-driving cars, where a single incorrect prediction can be very costly, it must be guaranteed that the model relies on the right features. This lowers the risk that the model behaves erroneously in the presence of novel factors of variation in the test data. Furthermore, interpretability is instrumental when applying machine learning to the sciences, as a detailed understanding of the trained model (e.g., which features it uses to capture the complex relations between physical or biological variables) is a prerequisite for building meaningful new scientific hypotheses. Without such understanding, and without the possibility of verifying that the model has learned something meaningful (e.g., obeying known physical or biological laws), even the best predictor is of no use for scientific purposes. Finally, from the perspective of a deep learning engineer, being able to visualize what the model has (or has not) learned is valuable, as it allows one to improve current models, e.g., by identifying biases in the data or the training procedure, or by comparing the strengths and weaknesses of different architectures.

Not surprisingly, the problem of visualizing and understanding neural networks has recently received a lot of attention in the community. Various techniques for interpreting deep neural networks have been proposed, and several workshops have been organized on related topics. However, the theoretical foundations of the interpretability problem are yet to be investigated, and the usefulness of the proposed methods in practice still needs to be demonstrated.

Our NIPS 2017 Workshop “Interpreting, Explaining and Visualizing Deep Learning - Now what?” aims to review recent techniques and establish new theoretical foundations for interpreting and understanding deep learning models. However, it will not stop at the methodological level, but will also address the “now what?” question. This strong focus on the applications of interpretable methods in deep learning distinguishes this workshop from previous events, as we aim to take the next step by exploring and extending the practical usefulness of interpreting, explaining and visualizing in deep learning. With this workshop we also aim to identify new fields of application for interpretable deep learning. Since the workshop will host invited speakers from various application domains (computer vision, NLP, neuroscience, medicine), it will provide an opportunity for participants to learn from each other and initiate new interdisciplinary collaborations. The workshop will contain invited research talks, short methods and applications talks, a poster and demonstration session, and a panel discussion. A selection of accepted papers, together with the invited contributions, will be published in an edited book by Springer LNCS in order to provide a representative overview of recent activities in this emerging research field.
