Additive MIL: Intrinsically Interpretable Multiple Instance Learning for Pathology

Syed Ashar Javed · Dinkar Juyal · Harshith Padigela · Amaro Taylor-Weiner · Limin Yu · Aaditya Prakash

Hall J #119

Keywords: [ Explainable AI ] [ Interpretability ] [ Medical Imaging ] [ Multiple Instance Learning ] [ Histopathology ] [ Additive Models ] [ Saliency ] [ Shapley Values ] [ Digital Pathology ]

[ Abstract ]
Tue 29 Nov 2 p.m. PST — 4 p.m. PST


Multiple Instance Learning (MIL) has been widely applied in pathology to solve critical problems such as automating cancer diagnosis and grading, and predicting patient prognosis and therapy response. Deploying these models in a clinical setting requires careful inspection of these black boxes during development and deployment to identify failures and maintain physician trust. In this work, we propose a simple formulation of MIL models that enables interpretability while maintaining similar predictive performance. Our Additive MIL models enable spatial credit assignment such that the contribution of each region in the image can be exactly computed and visualized. We show that our spatial credit assignment coincides with regions used by pathologists during diagnosis and improves upon classical attention heatmaps from attention MIL models. We show that any existing MIL model can be made additive with a simple change in function composition. We also show how these models can be used to debug model failures, identify spurious features, and highlight class-wise regions of interest, enabling their use in high-stakes environments such as clinical decision-making.
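The "change in function composition" the abstract refers to can be illustrated with a minimal NumPy sketch (not the authors' implementation; the array shapes, the toy MLP classifier `f`, and the attention weights here are hypothetical): a standard attention MIL model pools attended instance features and then classifies the pooled vector, whereas an additive variant classifies each attended instance first and then sums the per-instance logits, so each patch's contribution to the bag prediction is exact by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, C = 5, 8, 3  # patches per bag, feature dim, classes (illustrative sizes)
H = rng.normal(size=(N, D))                    # instance (patch) features
W1 = rng.normal(size=(D, 16))                  # toy nonlinear classifier weights
W2 = rng.normal(size=(16, C))

def f(x):
    """Hypothetical nonlinear instance/bag classifier (2-layer ReLU MLP)."""
    return np.maximum(x @ W1, 0.0) @ W2

a = np.exp(rng.normal(size=N))
a /= a.sum()                                   # attention weights, sum to 1

# Attention MIL: pool features, then classify the pooled vector.
# logits = f(sum_i a_i * h_i) — per-patch credit is entangled inside f.
logits_attn = f(a @ H)

# Additive MIL: classify each attended patch, then sum the logits.
# logits = sum_i f(a_i * h_i) — row i of `contributions` is patch i's
# exact, signed, per-class contribution to the bag prediction.
contributions = f(a[:, None] * H)              # shape (N, C)
logits_add = contributions.sum(axis=0)
```

Because summation is the final step, `contributions` decomposes `logits_add` exactly, which is what makes the heatmaps faithful rather than merely suggestive; with a nonlinear `f`, `logits_attn` generally has no such decomposition.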
