Deep models trained through maximum likelihood have achieved state-of-the-art results for survival analysis. Despite this training scheme, practitioners evaluate models under other criteria, such as binary classification losses at a chosen set of time horizons, e.g., the Brier score (BS) and Bernoulli log likelihood (BLL). Models trained with maximum likelihood may have poor BS or BLL since maximum likelihood does not directly optimize these criteria. Directly optimizing criteria like BS requires inverse-weighting by the censoring distribution. However, estimating the censoring model under these metrics requires inverse-weighting by the failure distribution. The objective for each model requires the other, but neither is known. To resolve this dilemma, we introduce Inverse-Weighted Survival Games. In these games, the objective for each model is built from a re-weighted estimate featuring the other model, where the latter is held fixed during training. When the loss is proper, we show that the games always have the true failure and censoring distributions as a stationary point. This means that models in the game do not leave the correct distributions once they are reached. We construct one case where this stationary point is unique. We show that these games optimize BS in simulations and then apply these principles to real-world cancer and critically-ill patient data.
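For concreteness, the alternating scheme described above can be sketched at a single time horizon as follows. This is a minimal illustration under stated assumptions, not the authors' released code: the network class, the (x, t) → survival-probability interface, and the helper names `SurvNet`, `ipcw_brier`, and `game_round` are all hypothetical, a single horizon is used, and left limits and ties in the inverse weights are ignored.

```python
# Minimal sketch of one round of an inverse-weighted survival game at a single
# horizon. Hypothetical interface: a network maps (x, t) to P(outcome > t | x).
# Not the authors' implementation; left limits and ties are ignored.
import torch
import torch.nn as nn


class SurvNet(nn.Module):
    """Toy stand-in: concatenate the horizon onto x and predict a survival probability."""

    def __init__(self, d_in, d_hidden=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d_in + 1, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, 1))

    def forward(self, x, t):
        t = torch.as_tensor(t, dtype=x.dtype, device=x.device).expand(len(x)).reshape(-1, 1)
        return torch.sigmoid(self.mlp(torch.cat([x, t], dim=1))).squeeze(-1)


def ipcw_brier(surv_prob, weights, time, event, horizon, eps=1e-4):
    """Inverse-weighted Brier score at `horizon`.

    surv_prob: the player's P(outcome > horizon | x)
    weights:   the *other* model's survival probability at min(time, horizon),
               detached so it acts as a fixed plug-in inverse weight
    event:     indicator that the outcome being modeled was observed
    """
    w = weights.detach().clamp(min=eps)
    had_event = ((time <= horizon) & (event == 1)).float()   # outcome before horizon: target 0
    at_risk = (time > horizon).float()                        # still at risk at horizon: target 1
    loss = had_event * surv_prob ** 2 / w + at_risk * (1.0 - surv_prob) ** 2 / w
    return loss.mean()


def game_round(failure_net, censor_net, opt_f, opt_c, x, time, event, horizon):
    t_eval = time.clamp(max=horizon)

    # Failure player: the censoring model supplies the inverse weights and is held fixed.
    opt_f.zero_grad()
    loss_f = ipcw_brier(failure_net(x, horizon), censor_net(x, t_eval), time, event, horizon)
    loss_f.backward()
    opt_f.step()

    # Censoring player: same loss with the roles swapped (censoring is the "event").
    opt_c.zero_grad()
    loss_c = ipcw_brier(censor_net(x, horizon), failure_net(x, t_eval), time, 1 - event, horizon)
    loss_c.backward()
    opt_c.step()
    return loss_f.item(), loss_c.item()
```

Detaching the other model's output inside the loss is what makes this a game rather than a single joint objective: each player treats its opponent's current estimate as a fixed inverse weight, matching the abstract's description of the other model being held fixed during training.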
Author Information
Xintian Han (New York University)
Mark Goldstein (New York University)
Aahlad Puli (New York University)
Thomas Wies (New York University)
Adler Perotte (Columbia University)
Rajesh Ranganath (New York University)
More from the Same Authors
- 2021 Spotlight: Offline RL Without Off-Policy Evaluation »
  David Brandfonbrener · Will Whitney · Rajesh Ranganath · Joan Bruna
- 2021: Learning Invariant Representations with Missing Data »
  Mark Goldstein · Adriel Saporta · Aahlad Puli · Rajesh Ranganath · Andrew Miller
- 2021: Learning to Accelerate MR Screenings »
  Raghav Singhal · Mukund Sudarshan · Angela Tong · Daniel Sodickson · Rajesh Ranganath
- 2021: Individual treatment effect estimation in the presence of unobserved confounding based on a fixed relative treatment effect »
  Wouter van Amsterdam · Rajesh Ranganath
- 2021: Quantile Filtered Imitation Learning »
  David Brandfonbrener · Will Whitney · Rajesh Ranganath · Joan Bruna
- 2021 Poster: Offline RL Without Off-Policy Evaluation »
  David Brandfonbrener · Will Whitney · Rajesh Ranganath · Joan Bruna
- 2020 Poster: Deep Direct Likelihood Knockoffs »
  Mukund Sudarshan · Wesley Tansey · Rajesh Ranganath
- 2020 Poster: General Control Functions for Causal Effect Estimation from IVs »
  Aahlad Puli · Rajesh Ranganath
- 2020 Poster: X-CAL: Explicit Calibration for Survival Analysis »
  Mark Goldstein · Xintian Han · Aahlad Puli · Adler Perotte · Rajesh Ranganath
- 2020 Poster: Causal Estimation with Functional Confounders »
  Aahlad Puli · Adler Perotte · Rajesh Ranganath
- 2019: Coffee break, posters, and 1-on-1 discussions »
  Julius von Kügelgen · David Rohde · Candice Schumann · Grace Charles · Victor Veitch · Vira Semenova · Mert Demirer · Vasilis Syrgkanis · Suraj Nair · Aahlad Puli · Masatoshi Uehara · Aditya Gopalan · Yi Ding · Ignavier Ng · Khashayar Khosravi · Eli Sherman · Shuxi Zeng · Aleksander Wieczorek · Hao Liu · Kyra Gan · Jason Hartford · Miruna Oprescu · Alexander D'Amour · Jörn Boehnke · Yuta Saito · Théophile Griveau-Billion · Chirag Modi · Shyngys Karimov · Jeroen Berrevoets · Logan Graham · Imke Mayer · Dhanya Sridhar · Issa Dahabreh · Alan Mishler · Duncan Wadsworth · Khizar Qureshi · Rahul Ladhania · Gota Morishita · Paul Welle
- 2019 Poster: Energy-Inspired Models: Learning with Sampler-Induced Distributions »
  Dieterich Lawson · George Tucker · Bo Dai · Rajesh Ranganath
- 2018 Poster: Removing Hidden Confounding by Experimental Grounding »
  Nathan Kallus · Aahlad Puli · Uri Shalit
- 2018 Spotlight: Removing Hidden Confounding by Experimental Grounding »
  Nathan Kallus · Aahlad Puli · Uri Shalit