Understanding the results of deep neural networks is an essential step towards wider acceptance of deep learning algorithms. Many approaches address the issue of interpreting artificial neural networks, but they often provide divergent explanations. Moreover, different hyperparameters of an explanatory method can lead to conflicting interpretations. In this paper, we propose a technique for aggregating the feature attributions of different explanatory algorithms using a Restricted Boltzmann Machine (RBM) to achieve a more accurate and robust interpretation of deep neural networks. Several challenging experiments on real-world datasets show that the proposed RBM method outperforms popular feature attribution methods and basic ensemble techniques.
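The aggregation idea can be sketched with scikit-learn's `BernoulliRBM`: stack the normalized attribution vectors produced by several explainers as training samples, fit an RBM with a single hidden unit, and read a consensus per-feature relevance from the learned weights. This is a minimal illustrative sketch, not necessarily the paper's exact procedure; the explainer outputs below are random placeholders.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
n_features = 10

# Placeholder attribution maps from three hypothetical explainers
# (in practice these would come from e.g. saliency, SHAP, LIME).
attributions = np.abs(rng.normal(size=(3, n_features)))

# Min-max normalise each map to [0, 1] so the RBM sees Bernoulli-like inputs.
lo = attributions.min(axis=1, keepdims=True)
span = np.ptp(attributions, axis=1, keepdims=True)
attributions = (attributions - lo) / (span + 1e-12)

# One hidden unit: its weights act as a consensus over the explainers.
rbm = BernoulliRBM(n_components=1, learning_rate=0.05, n_iter=200, random_state=0)
rbm.fit(attributions)

# Aggregated per-feature relevance, rescaled to [0, 1].
aggregated = np.abs(rbm.components_.ravel())
aggregated /= aggregated.max()
print(aggregated.shape)  # (10,)
```

Because the RBM is trained without labels, the aggregation remains unsupervised: it only exploits agreement structure among the individual attribution maps.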
Author Information
Vadim Borisov (University of Tuebingen)
Johannes Meier (University of Tuebingen)
Johan Van den Heuvel (University of Tuebingen)
Hamed Jalali (University of Tuebingen)
Gjergji Kasneci (University of Tuebingen)
Related Events (a corresponding poster, oral, or spotlight)
- 2021: A Robust Unsupervised Ensemble of Feature-Based Explanations using Restricted Boltzmann Machines
More from the Same Authors
- 2021: CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
  Martin Pawelczyk · Sascha Bielawski · Johan Van den Heuvel · Tobias Richter · Gjergji Kasneci
- 2021: Gaussian Graphical Models as an Ensemble Method for Distributed Gaussian Processes
  Hamed Jalali · Gjergji Kasneci
- 2021: Poster Session 1 (gather.town)
  Hamed Jalali · Robert Hönig · Maximus Mutschler · Manuel Madeira · Abdurakhmon Sadiev · Egor Shulgin · Alasdair Paren · Pascal Esser · Simon Roburin · Julius Kunze · Agnieszka Słowik · Frederik Benzing · Futong Liu · Hongyi Li · Ryotaro Mitsuboshi · Grigory Malinovsky · Jayadev Naram · Zhize Li · Igor Sokolov · Sharan Vaswani