

Poster in Workshop: Bridging the Gap: from Machine Learning Research to Clinical Practice

Interpretable Data Analysis for Bench-to-Bedside Research

Zohreh Shams · Botty Dimanov · Nikola Simidjievski · Helena Andres-Terre · Paul Scherer · Urška Matjašec · Mateja Jamnik · Pietro Lió


Abstract:

Despite their state-of-the-art performance, the lack of explainability impedes the deployment of deep learning models in day-to-day clinical practice. We propose REM, an explainable methodology for extracting rules from deep neural networks and combining them with rules from non-deep-learning models. This allows integrating machine learning and reasoning for investigating basic and applied biological research questions. We evaluate the utility of REM in two cancer case studies and demonstrate that it can efficiently extract accurate and comprehensible rulesets from neural networks that can be readily integrated with rulesets obtained from tree-based approaches. REM provides explanation facilities for predictions and enables clinicians to validate and calibrate the extracted rulesets with their domain knowledge. With these functionalities, REM caters for a novel and direct human-in-the-loop approach to clinical decision-making.
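As a rough illustration of the idea described above (not REM's actual extraction algorithm, which the abstract does not detail), rulesets from different models can be represented as lists of IF-THEN rules, merged by concatenation, and used for prediction with an explanation in the form of the firing rule. All names and rules below are hypothetical:

```python
# Illustrative sketch only: a ruleset is a list of (conditions, class) pairs,
# where each condition is (feature_name, operator, threshold).

def satisfies(sample, conditions):
    """Check whether a sample meets every condition of a rule."""
    ops = {"<=": lambda a, b: a <= b, ">": lambda a, b: a > b}
    return all(ops[op](sample[feat], thr) for feat, op, thr in conditions)

def predict_with_explanation(ruleset, sample, default="unknown"):
    """Return (prediction, firing rule) for the first matching rule."""
    for conditions, label in ruleset:
        if satisfies(sample, conditions):
            return label, conditions
    return default, None

# Hypothetical rules: one extracted from a neural network,
# one from a tree-based model.
nn_rules = [([("gene_A", ">", 0.7)], "high_risk")]
tree_rules = [([("gene_B", "<=", 0.2)], "low_risk")]

# "Integration" is sketched here as simply concatenating the rulesets;
# a clinician could inspect, remove, or edit individual rules before use.
combined = nn_rules + tree_rules

label, why = predict_with_explanation(combined, {"gene_A": 0.9, "gene_B": 0.5})
# label is the predicted class; why is the rule that justified it.
```

Because every prediction is traced back to a human-readable rule, this representation is what makes the validation and calibration step by clinicians possible.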
