Observational studies often seek to infer the causal effect of a treatment even though both the assigned treatment and the outcome depend on other confounding variables. An effective strategy for adjusting for confounders is to estimate a propensity model that corrects for the relationship between covariates and assigned treatment. Unfortunately, the confounding variables themselves are not always observed, in which case we can only bound the true propensity, and therefore only bound the magnitude of causal effects. In many important cases, like administering a dose of some medicine, the possible treatments belong to a continuum. Sensitivity models, which tie the unobservable true propensity to a quantity that can be estimated from data, have been explored for binary treatments; we propose one for continuous treatments. We develop a framework to compute ignorance intervals on the partially identified dose-response curves, enabling us to quantify the susceptibility of an inference to hidden confounders. We show with real-world observational studies that our approach can give non-trivial bounds on causal effects from continuous treatments in the presence of hidden confounders.
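As a rough illustration of the kind of computation such a framework involves (not the paper's exact sensitivity model), the sketch below bounds a kernel-smoothed inverse-propensity estimate of a dose-response value E[Y(t0)] when each nominal weight may be mis-specified by a multiplicative factor in [1/gamma, gamma]. The Gaussian nominal propensity, the kernel bandwidth, the sensitivity parameter `gamma`, and the function `ignorance_interval` are all assumptions introduced for this sketch.

```python
# Minimal sketch of an ignorance interval on a dose-response value under a
# marginal-sensitivity-style bound adapted to a continuous treatment.
# This is illustrative only; it is NOT the delta-sensitivity model of the paper.
import numpy as np

def ignorance_interval(y, t, pi_hat, t0, gamma, bandwidth=0.5):
    """Bound a kernel-smoothed IPW estimate of E[Y(t0)] when each nominal
    inverse-propensity weight may be off by a factor in [1/gamma, gamma]."""
    # Kernel weights localize the estimate around the target dose t0.
    k = np.exp(-0.5 * ((t - t0) / bandwidth) ** 2)
    base_w = k / pi_hat                        # nominal localized IPW weights
    lo_w, hi_w = base_w / gamma, base_w * gamma

    # Extremize the normalized weighted mean over the box of admissible weights:
    # the optimum is attained at a threshold assignment in outcome order.
    order = np.argsort(y)
    ys, lo, hi = y[order], lo_w[order], hi_w[order]
    best_hi, best_lo = -np.inf, np.inf
    for cut in range(len(ys) + 1):
        # Upper bound: small weights on low outcomes, large weights on high ones.
        w_up = np.concatenate([lo[:cut], hi[cut:]])
        best_hi = max(best_hi, np.dot(w_up, ys) / w_up.sum())
        # Lower bound: large weights on low outcomes, small weights on high ones.
        w_dn = np.concatenate([hi[:cut], lo[cut:]])
        best_lo = min(best_lo, np.dot(w_dn, ys) / w_dn.sum())
    return best_lo, best_hi

# Toy usage with a hypothetical (assumed-known) Gaussian propensity pi_hat(t | x);
# in practice pi_hat would be estimated, and x may be only partially observed.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
t = x + rng.normal(size=500)                    # treatment depends on the covariate
y = 2.0 * t + x + rng.normal(size=500)          # outcome depends on both
pi_hat = np.exp(-0.5 * (t - x) ** 2) / np.sqrt(2.0 * np.pi)
print(ignorance_interval(y, t, pi_hat, t0=1.0, gamma=1.5))
```

Larger values of `gamma` widen the interval, which is one way the susceptibility of an inference to hidden confounders can be quantified.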
Author Information
Myrl Marmarelis (USC Information Sciences Institute)
I come up with better ways to make sense of causal models in machine learning. Currently, I am spearheading a cross-disciplinary collaboration on single-cell transcriptomics with clinical relevance. We are tackling the high dimensionality of this increasingly popular sensing modality. In another multi-university initiative, I am in charge of forecasting the outcomes of complex scenarios in environments with extreme uncertainty.
Greg Ver Steeg (USC Information Sciences Institute)
Neda Jahanshad (University of Southern California)
Aram Galstyan (USC Information Sciences Institute)
More from the Same Authors
- 2022 : Federated Progressive Sparsification (Purge-Merge-Tune)+ »
  Dimitris Stripelis · Umang Gupta · Greg Ver Steeg · Jose-Luis Ambite
- 2021 Poster: Information-theoretic generalization bounds for black-box learning algorithms »
  Hrayr Harutyunyan · Maxim Raginsky · Greg Ver Steeg · Aram Galstyan
- 2021 Poster: Hamiltonian Dynamics with Non-Newtonian Momentum for Rapid Sampling »
  Greg Ver Steeg · Aram Galstyan
- 2021 Poster: Implicit SVD for Graph Representation Learning »
  Sami Abu-El-Haija · Hesham Mostafa · Marcel Nassar · Valentino Crespi · Greg Ver Steeg · Aram Galstyan
- 2020 Workshop: Deep Learning through Information Geometry »
  Pratik Chaudhari · Alexander Alemi · Varun Jog · Dhagash Mehta · Frank Nielsen · Stefano Soatto · Greg Ver Steeg
- 2019 : Poster Session »
  Gergely Flamich · Shashanka Ubaru · Charles Zheng · Josip Djolonga · Kristoffer Wickstrøm · Diego Granziol · Konstantinos Pitas · Jun Li · Robert Williamson · Sangwoong Yoon · Kwot Sin Lee · Julian Zilly · Linda Petrini · Ian Fischer · Zhe Dong · Alexander Alemi · Bao-Ngoc Nguyen · Rob Brekelmans · Tailin Wu · Aditya Mahajan · Alexander Li · Kirankumar Shiragur · Yair Carmon · Linara Adilova · SHIYU LIU · Bang An · Sanjeeb Dash · Oktay Gunluk · Arya Mazumdar · Mehul Motani · Julia Rosenzweig · Michael Kamp · Marton Havasi · Leighton P Barnes · Zhengqing Zhou · Yi Hao · Dylan Foster · Yuval Benjamini · Nati Srebro · Michael Tschannen · Paul Rubenstein · Sylvain Gelly · John Duchi · Aaron Sidford · Robin Ru · Stefan Zohren · Murtaza Dalal · Michael A Osborne · Stephen J Roberts · Moses Charikar · Jayakumar Subramanian · Xiaodi Fan · Max Schwarzer · Nicholas Roberts · Simon Lacoste-Julien · Vinay Prabhu · Aram Galstyan · Greg Ver Steeg · Lalitha Sankar · Yung-Kyun Noh · Gautam Dasarathy · Frank Park · Ngai-Man (Man) Cheung · Ngoc-Trung Tran · Linxiao Yang · Ben Poole · Andrea Censi · Tristan Sylvain · R Devon Hjelm · Bangjie Liu · Jose Gallego-Posada · Tyler Sypherd · Kai Yang · Jan Nikolas Morshuis
- 2019 Poster: Fast structure learning with modular regularization »
  Greg Ver Steeg · Hrayr Harutyunyan · Daniel Moyer · Aram Galstyan
- 2019 Spotlight: Fast structure learning with modular regularization »
  Greg Ver Steeg · Hrayr Harutyunyan · Daniel Moyer · Aram Galstyan
- 2019 Poster: Exact Rate-Distortion in Autoencoders via Echo Noise »
  Rob Brekelmans · Daniel Moyer · Aram Galstyan · Greg Ver Steeg
- 2018 Poster: Invariant Representations without Adversarial Training »
  Daniel Moyer · Shuyang Gao · Rob Brekelmans · Aram Galstyan · Greg Ver Steeg
- 2017 : Coffee break and Poster Session II »
  Mohamed Kane · Albert Haque · Vagelis Papalexakis · John Guibas · Peter Li · Carlos Arias · Eric Nalisnick · Padhraic Smyth · Frank Rudzicz · Xia Zhu · Theodore Willke · Noemie Elhadad · Hans Raffauf · Harini Suresh · Paroma Varma · Yisong Yue · Ognjen (Oggi) Rudovic · Luca Foschini · Syed Rameel Ahmad · Hasham ul Haq · Valerio Maggio · Giuseppe Jurman · Sonali Parbhoo · Pouya Bashivan · Jyoti Islam · Mirco Musolesi · Chris Wu · Alexander Ratner · Jared Dunnmon · Cristóbal Esteban · Aram Galstyan · Greg Ver Steeg · Hrant Khachatrian · Marc Górriz · Mihaela van der Schaar · Anton Nemchenko · Manasi Patwardhan · Tanay Tandon
- 2016 Poster: Variational Information Maximization for Feature Selection »
  Shuyang Gao · Greg Ver Steeg · Aram Galstyan
- 2014 Poster: Discovering Structure in High-Dimensional Data Through Correlation Explanation »
  Greg Ver Steeg · Aram Galstyan
- 2011 Poster: Comparative Analysis of Viterbi Training and Maximum Likelihood Estimation for HMMs »
  Armen Allahverdyan · Aram Galstyan