

Poster

What went wrong and when? Instance-wise feature importance for time-series black-box models

Sana Tonekaboni · Shalmali Joshi · Kieran Campbell · David Duvenaud · Anna Goldenberg

Poster Session 3 #1069

Abstract:

Explanations of time series models are useful for high-stakes applications like healthcare but have received little attention in the machine learning literature. We propose FIT, a framework that evaluates the importance of observations for a multivariate time-series black-box model by quantifying the shift in the predictive distribution over time. FIT defines the importance of an observation by its contribution to that distributional shift, measured with a KL divergence that contrasts the predictive distribution against a counterfactual in which the rest of the features are unobserved. We also demonstrate the need to control for time-dependent distribution shifts. We compare against state-of-the-art baselines on simulated and real-world clinical data and show that our approach is superior at identifying important time points and observations throughout the time series.
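To make the scoring idea concrete, below is a minimal Python sketch of the kind of KL-divergence-based score the abstract describes: the shift in the predictive distribution at time t, minus the shift that remains under a counterfactual where only the scored feature is newly observed and the remaining features at t are imputed. The `toy_model`, the carry-forward-plus-noise imputation, and all function names are illustrative assumptions for this sketch, not the paper's implementation.

```python
import numpy as np
from scipy.stats import entropy

def kl(p, q, eps=1e-8):
    """KL divergence between two categorical predictive distributions."""
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return entropy(p, q)

def feature_importance(model, X, t, i, n_samples=50, rng=None):
    """Score feature i at time t (t >= 1) for a black-box `model` that maps a
    (features x time) history to a predictive distribution over classes.
    Importance = total predictive shift at t minus the shift that remains
    when only feature i is observed and the rest are crudely imputed."""
    rng = rng or np.random.default_rng()
    p_full = model(X[:, : t + 1])      # predictive dist. with everything up to t
    p_past = model(X[:, :t])           # predictive dist. without time step t
    total_shift = kl(p_full, p_past)

    # Counterfactual: keep feature i at time t, resample the others
    # (placeholder imputation: carry forward the previous value + noise).
    residual = []
    mask = np.arange(X.shape[0]) != i
    for _ in range(n_samples):
        X_cf = X[:, : t + 1].copy()
        X_cf[mask, t] = X[mask, t - 1] + 0.1 * rng.standard_normal(mask.sum())
        residual.append(kl(p_full, model(X_cf)))
    return total_shift - float(np.mean(residual))

# Toy usage with a stand-in "black box": a logistic score over the last time step.
def toy_model(X_hist):
    logit = X_hist[:, -1].sum() if X_hist.shape[1] > 0 else 0.0
    p1 = 1.0 / (1.0 + np.exp(-logit))
    return np.array([1.0 - p1, p1])

X = np.random.default_rng(0).standard_normal((3, 10))  # 3 features, 10 time steps
print(feature_importance(toy_model, X, t=5, i=0))
```

A large score means that observing feature i at time t accounts for most of the change in the model's prediction at that step, which is the sense of "instance-wise" importance in the title: scores are assigned per observation and per time point rather than globally per feature.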
