Interpretable Data Analysis for Bench-to-Bedside Research
Zohreh Shams · Botty Dimanov · Nikola Simidjievski · Helena Andres-Terre · Paul Scherer · Urška Matjašec · Mateja Jamnik · Pietro Lió

Despite their state-of-the-art performance, the lack of explainability impedes the deployment of deep learning in day-to-day clinical practice. We propose REM, an explainable methodology for extracting rules from deep neural networks and combining them with rules from non-deep-learning models. This allows integrating machine learning and reasoning for investigating basic and applied biological research questions. We evaluate the utility of REM in two cancer case studies and demonstrate that it efficiently extracts accurate and comprehensible rulesets from neural networks that can be readily integrated with rulesets obtained from tree-based approaches. REM provides explanation facilities for predictions and enables clinicians to validate and calibrate the extracted rulesets against their domain knowledge. With these functionalities, REM caters to a novel and direct human-in-the-loop approach to clinical decision-making.
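The combination of rulesets from different model families can be sketched in simplified form. The rule representation, function names, and the majority-vote combination below are illustrative assumptions for the general idea of merging IF-THEN rulesets, not the actual REM implementation:

```python
# Illustrative sketch: rules as (conditions, class_label) pairs, where each
# condition is (feature_index, comparison_operator, threshold). A ruleset
# extracted from a neural network and one from a tree-based model can then
# be concatenated and queried jointly. All names here are hypothetical.
from operator import le, gt


def predict(ruleset, x, default=0):
    """Majority vote over the labels of all rules whose conditions x satisfies."""
    votes = [label for conditions, label in ruleset
             if all(op(x[i], thr) for i, op, thr in conditions)]
    if not votes:
        return default  # no rule fires: fall back to a default class
    return max(set(votes), key=votes.count)


# Hypothetical ruleset extracted from a neural network:
nn_rules = [([(0, gt, 0.5)], 1),
            ([(0, le, 0.5)], 0)]

# Hypothetical ruleset from a tree-based model:
tree_rules = [([(1, gt, 2.0)], 1)]

# Integration is simple concatenation; the fired rules also serve as the
# explanation for each prediction.
combined = nn_rules + tree_rules
```

Because every prediction is traceable to the specific rules that fired, a clinician can inspect, remove, or adjust individual rules, which is the kind of validation and calibration the abstract describes.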

Author Information

Zohreh Shams (University of Cambridge)
Botty Dimanov (University of Cambridge)
Nikola Simidjievski (University of Cambridge)
Helena Andres-Terre (University of Cambridge)
Paul Scherer (University of Cambridge)
Urška Matjašec (University of Cambridge)
Mateja Jamnik (University of Cambridge)
Pietro Lió (University of Cambridge)