AME (Almost Matching Exactly) is an interactive, web-based application that lets users perform matching for observational causal inference on large datasets. The AME application is powered by the Fast Large-Scale Almost Matching Exactly (FLAME) (JMLR'21) and Dynamic Almost Matching Exactly (DAME) (AISTATS'19) algorithms, which match treatment and control units in a way that is interpretable (matches are made directly on covariates), high-quality (machine learning determines which covariates are important to match on), and scalable (using techniques from data management). Our demonstration shows the usefulness of these algorithms and supports easy interactive exploration of treatment effect estimates and the corresponding matched groups of units, with a suite of visualizations that provide detailed insights to users.
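To make the matching idea concrete, here is a minimal sketch of the almost-matching-exactly loop: match units that agree exactly on the current covariate set, then drop the least important covariate so remaining units can match more coarsely. This is an illustrative simplification, not the authors' implementation; the function name `ame_match` and the assumption that covariates arrive pre-sorted by importance are ours. In FLAME, the covariate ordering is learned from a machine-learned match-quality objective, and the grouping is accelerated with database-style techniques.

```python
import pandas as pd

def ame_match(df, covariates_by_importance, treat="treated"):
    """Toy sketch (hypothetical, not the authors' code) of Almost Matching
    Exactly. `covariates_by_importance` is assumed sorted from most to
    least important; FLAME instead learns which covariate to drop next."""
    remaining = list(covariates_by_importance)
    unmatched = df
    groups = []
    while remaining and len(unmatched) > 0:
        # Units that share identical values on every remaining covariate
        # form a candidate matched group.
        for _, g in unmatched.groupby(remaining):
            # A valid matched group needs at least one treated unit and
            # at least one control unit.
            if g[treat].nunique() == 2:
                groups.append(g)
        matched = (pd.concat(groups).index.intersection(unmatched.index)
                   if groups else [])
        unmatched = unmatched.drop(index=matched)
        # Coarsen: drop the least important covariate and try again with
        # the still-unmatched units.
        remaining.pop()
    return groups
```

Given the matched groups, a treatment effect estimate can be formed by differencing mean outcomes between treated and control units within each group and averaging across groups, which is the kind of quantity the AME interface lets users explore interactively.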
Author Information
Haoning Jiang (Duke University)
Thomas Howell (Duke University)
Neha Gupta (Duke University)
Vittorio Orlandi (Duke University)
Sudeepa Roy (Duke University, USA)
Marco Morucci (Duke University)
Harsh Parikh (Duke University)
Alexander Volfovsky (Duke University)
Cynthia Rudin (Duke University)
More from the Same Authors
- 2022 : Making the World More Equal, One Ride at a Time: Studying Public Transportation Initiatives Using Interpretable Causal Inference »
  Gaurav Rajesh Parikh · Albert Sun · Jenny Huang · Lesia Semenova · Cynthia Rudin
- 2023 Poster: This Looks Like Those: Illuminating Prototypical Concepts Using Multiple Visualizations »
  Chiyu Ma · Brandon Zhao · Chaofan Chen · Cynthia Rudin
- 2023 Poster: A Path to Simpler Models Starts With Noise »
  Lesia Semenova · Harry Chen · Ronald Parr · Cynthia Rudin
- 2023 Poster: The Rashomon Importance Distribution: Getting RID of Unstable, Single Model-based Variable Importance »
  Jon Donnelly · Srikar Katta · Cynthia Rudin · Edward Browne
- 2023 Poster: Experimental Designs for Heteroskedastic Variance »
  Justin Weltz · Tanner Fiez · Alexander Volfovsky · Eric Laber · Blake Mason · Houssam Nassif · Lalit Jain
- 2023 Poster: Exploring and Interacting with the Set of Good Sparse Generalized Additive Models »
  Zhi Chen · Chudi Zhong · Margo Seltzer · Cynthia Rudin
- 2023 Poster: OKRidge: Scalable Optimal k-Sparse Ridge Regression for Learning Dynamical Systems »
  Jiachang Liu · Sam Rosen · Chudi Zhong · Cynthia Rudin
- 2022 Panel: Panel 3A-2: Linear tree shap… & Exploring the Whole… »
  Peng Yu · Cynthia Rudin
- 2022 : Panel Discussion »
  Cynthia Rudin · Dan Bohus · Brenna Argall · Alison Gopnik · Igor Mordatch · Samuel Kaski
- 2022 : Let’s Give Domain Experts a Choice by Creating Many Approximately-Optimal Machine Learning Models »
  Cynthia Rudin
- 2022 Poster: Exploring the Whole Rashomon Set of Sparse Decision Trees »
  Rui Xin · Chudi Zhong · Zhi Chen · Takuya Takagi · Margo Seltzer · Cynthia Rudin
- 2022 Poster: Rethinking Nonlinear Instrumental Variable Models through Prediction Validity »
  Chunxiao Li · Cynthia Rudin · Tyler H. McCormick
- 2022 Poster: FasterRisk: Fast and Accurate Interpretable Risk Scores »
  Jiachang Liu · Chudi Zhong · Boxuan Li · Margo Seltzer · Cynthia Rudin
- 2020 : Contributed Talk - Cryo-ZSSR: multiple-image super-resolution based on deep internal learning »
  Qinwen Huang · Reed Chen · Cynthia Rudin
- 2020 Workshop: Self-Supervised Learning -- Theory and Practice »
  Pengtao Xie · Shanghang Zhang · Pulkit Agrawal · Ishan Misra · Cynthia Rudin · Abdelrahman Mohamed · Wenzhen Yuan · Barret Zoph · Laurens van der Maaten · Xingyi Yang · Eric Xing
- 2020 : How should researchers engage with controversial applications of AI? »
  Logan Koepke · Catherine O'Neil · Tawana Petty · Cynthia Rudin · Deborah Raji · Shawn Bushway
- 2020 Workshop: Fair AI in Finance »
  Senthil Kumar · Cynthia Rudin · John Paisley · Isabelle Moulinier · C. Bayan Bruss · Eren K. · Susan Tibbs · Oluwatobi Olabiyi · Simona Gandrabur · Svitlana Vyetrenko · Kevin Compher
- 2019 Poster: This Looks Like That: Deep Learning for Interpretable Image Recognition »
  Chaofan Chen · Oscar Li · Daniel Tao · Alina Barnett · Cynthia Rudin · Jonathan K Su
- 2019 Spotlight: This Looks Like That: Deep Learning for Interpretable Image Recognition »
  Chaofan Chen · Oscar Li · Daniel Tao · Alina Barnett · Cynthia Rudin · Jonathan K Su
- 2019 Poster: Optimal Sparse Decision Trees »
  Xiyang Hu · Cynthia Rudin · Margo Seltzer
- 2019 Spotlight: Optimal Sparse Decision Trees »
  Xiyang Hu · Cynthia Rudin · Margo Seltzer
- 2018 : Invited Talk 6: Is it possible to have interpretable models for AI in Finance? »
  Cynthia Rudin
- 2018 : Poster Session 1 (note there are numerous missing names here, all papers appear in all poster sessions) »
  Akhilesh Gotmare · Kenneth Holstein · Jan Brabec · Michal Uricar · Kaleigh Clary · Cynthia Rudin · Sam Witty · Andrew Ross · Shayne O'Brien · Babak Esmaeili · Jessica Forde · Massimo Caccia · Ali Emami · Scott Jordan · Bronwyn Woods · D. Sculley · Rebekah Overdorf · Nicolas Le Roux · Peter Henderson · Brandon Yang · Tzu-Yu Liu · David Jensen · Niccolo Dalmasso · Weitang Liu · Paul Marc TRICHELAIR · Jun Ki Lee · Akanksha Atrey · Matt Groh · Yotam Hechtlinger · Emma Tosch
- 2017 Workshop: From 'What If?' To 'What Next?' : Causal Inference and Machine Learning for Intelligent Decision Making »
  Ricardo Silva · Panagiotis Toulis · John Shawe-Taylor · Alexander Volfovsky · Thorsten Joachims · Lihong Li · Nathan Kallus · Adith Swaminathan
- 2017 : Introductions »
  Panagiotis Toulis · Alexander Volfovsky