Given an unexpected change in the output metric of a large-scale system, it is important to answer why the change occurred: which inputs caused the change in the metric? A key component of such an attribution question is estimating the counterfactual: the (hypothetical) change in the system metric due to a specified change in a single input. However, due to inherent stochasticity and complex interactions between parts of the system, it is difficult to model an output metric directly. We utilize the computational structure of a system to break up the modelling task into sub-parts, such that each sub-part corresponds to a more stable mechanism that can be modelled accurately over time. Using the system's structure also helps to view the metric as a computation over a structural causal model (SCM), thus providing a principled way to estimate counterfactuals. Specifically, we propose a method to estimate counterfactuals using time-series predictive models and construct an attribution score, CF-Shapley, that is consistent with desirable axioms for attributing an observed change in the output metric. Unlike past work on causal Shapley values, our proposed method can attribute a single observed change in output (rather than a population-level effect) and thus provides more accurate attribution scores when evaluated on simulated datasets. As a real-world application, we analyze a query-ad matching system with the goal of attributing an observed change in a metric for ad matching density. The attribution scores explain how query volume and ad demand from different query categories affect the ad matching density, uncovering the role of external events (e.g., "Cheetah Day") in driving the matching density.
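To make the attribution idea concrete, below is a minimal Python sketch of a Shapley-style attribution over model-estimated counterfactuals, under stated assumptions; it is not the paper's implementation. The names `cf_shapley`, `predict_metric`, `x_ref`, and `x_obs` are illustrative, and `predict_metric` stands in for a fitted model of the system metric (in the paper's setting, a time-series predictor over the SCM's mechanisms). The coalition value v(S) is taken to be the counterfactual metric when the inputs in S are set to their observed values and all other inputs keep their reference values.

```python
from itertools import combinations
from math import factorial
from typing import Callable, Dict, Sequence


def cf_shapley(
    inputs: Sequence[str],
    x_ref: Dict[str, float],
    x_obs: Dict[str, float],
    predict_metric: Callable[[Dict[str, float]], float],
) -> Dict[str, float]:
    """Split the metric change between x_ref and x_obs across inputs.

    v(S) is the (model-estimated) counterfactual metric when the inputs
    in coalition S take their observed values and the rest keep their
    reference values; each input's score is its Shapley value under v.
    """
    n = len(inputs)

    def v(coalition: frozenset) -> float:
        x = {j: (x_obs[j] if j in coalition else x_ref[j]) for j in inputs}
        return predict_metric(x)

    scores: Dict[str, float] = {}
    for i in inputs:
        others = [j for j in inputs if j != i]
        phi = 0.0
        for k in range(n):  # coalition sizes 0 .. n-1
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (v(frozenset(subset) | {i}) - v(frozenset(subset)))
        scores[i] = phi
    return scores


# Toy usage with a hypothetical linear metric model: only query volume
# changed between the reference and observed day, so it should receive
# the entire attribution.
model = lambda x: 0.7 * x["query_volume"] + 0.3 * x["ad_demand"]
scores = cf_shapley(
    inputs=["query_volume", "ad_demand"],
    x_ref={"query_volume": 100.0, "ad_demand": 50.0},
    x_obs={"query_volume": 140.0, "ad_demand": 50.0},
    predict_metric=model,
)
print(scores)  # {'query_volume': 28.0, 'ad_demand': 0.0}
```

The scores satisfy the efficiency axiom (they sum to `predict_metric(x_obs) - predict_metric(x_ref)`), which is what allows them to account for a single observed change rather than a population-level effect. The exhaustive enumeration above is exponential in the number of inputs, so a real system with many query categories would approximate the sum, e.g., by sampling permutations.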
Author Information
Amit Sharma (Microsoft Research)
Hua Li (Peking University)
Jian Jiao (Microsoft)
More from the Same Authors
- 2022 Poster: Probing Classifiers are Unreliable for Concept Removal and Detection
  Abhinav Kumar · Chenhao Tan · Amit Sharma
- 2022: Using Interventions to Improve Out-of-Distribution Generalization of Text-Matching Systems
  Parikshit Bansal · Yashoteja Prabhu · Emre Kiciman · Amit Sharma
- 2022: A Causal AI Suite for Decision-Making
  Emre Kiciman · Eleanor Dillon · Darren Edge · Adam Foster · Joel Jennings · Chao Ma · Robert Ness · Nick Pawlowski · Amit Sharma · Cheng Zhang
- 2022: Deep End-to-end Causal Inference
  Tomas Geffner · Javier Antorán · Adam Foster · Wenbo Gong · Chao Ma · Emre Kiciman · Amit Sharma · Angus Lamb · Martin Kukla · Nick Pawlowski · Miltiadis Allamanis · Cheng Zhang
- 2022: Counterfactual Generation Under Confounding
  Abbavaram Gowtham Reddy · Saloni Dash · Amit Sharma · Vineeth N Balasubramanian
- 2022 Spotlight: Probing Classifiers are Unreliable for Concept Removal and Detection
  Abhinav Kumar · Chenhao Tan · Amit Sharma
- 2022 Spotlight: Lightning Talks 1B-1
  Qitian Wu · Runlin Lei · Rongqin Chen · Luca Pinchetti · Yangze Zhou · Abhinav Kumar · Hans Hao-Hsun Hsu · Wentao Zhao · Chenhao Tan · Zhen Wang · Shenghui Zhang · Yuesong Shen · Tommaso Salvatori · Gitta Kutyniok · Zenan Li · Amit Sharma · Leong Hou U · Yordan Yordanov · Christian Tomani · Bruno Ribeiro · Yaliang Li · David P Wipf · Daniel Cremers · Bolin Ding · Beren Millidge · Ye Li · Yuhang Song · Junchi Yan · Zhewei Wei · Thomas Lukasiewicz