

Poster in Workshop: Medical Imaging meets NeurIPS

Explainable medical image analysis by leveraging human-interpretable features through mutual information minimization

Erick M Cobos · Thomas Kuestner · Bernhard Schölkopf · Sergios Gatidis


Abstract:

Deep learning models used as computer-assisted diagnosis systems in a medical context achieve high accuracy on numerous tasks; however, explaining their predictions remains challenging. In the medical domain in particular, we aspire to models that are not only accurate but can also provide explanations for their outcomes. In this work we propose a deep learning-based framework for medical image analysis that is inherently explainable while maintaining high prediction accuracy. To this end, we introduce a hybrid approach which uses human-interpretable as well as machine-learned features while minimizing the mutual information between them. Using images of skin lesions, we empirically show that our approach achieves human-level performance while being intrinsically interpretable.
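The abstract describes combining a human-interpretable feature branch with a machine-learned branch while minimizing the mutual information between them. The paper does not publish its implementation here, so the following is only an illustrative sketch under a simplifying assumption: for (approximately) jointly Gaussian features, zero cross-covariance implies zero mutual information, so a squared cross-covariance penalty can act as a cheap proxy for the MI term. The function name and the synthetic data are hypothetical, not from the paper.

```python
import numpy as np

def cross_covariance_penalty(h, z):
    """Squared Frobenius norm of the cross-covariance between an
    interpretable feature batch h and a learned feature batch z.
    Under a joint-Gaussian assumption, driving this to zero drives
    their mutual information to zero; real MI minimization (as in
    the paper) would use a learned MI estimator instead."""
    hc = h - h.mean(axis=0)          # center each branch
    zc = z - z.mean(axis=0)
    cov = hc.T @ zc / (len(h) - 1)   # cross-covariance matrix
    return float(np.sum(cov ** 2))

rng = np.random.default_rng(0)
# h: stand-in for hand-crafted lesion descriptors (hypothetical)
h = rng.normal(size=(256, 4))
# z_dep largely duplicates h; z_ind carries independent information
z_dep = h[:, :2] + 0.1 * rng.normal(size=(256, 2))
z_ind = rng.normal(size=(256, 2))

# A redundant learned branch incurs a much larger penalty,
# which is what the minimization objective would suppress.
print(cross_covariance_penalty(h, z_dep) > cross_covariance_penalty(h, z_ind))
```

In a training loop this penalty would be added, with a weighting coefficient, to the prediction loss, pushing the learned branch to encode only information complementary to the interpretable features.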
