

Learning to search efficiently for causally near-optimal treatments

Samuel HĂ„kansson · Viktor Lindblom · Omer Gottesman · Fredrik Johansson

Poster Session 4 #1265

Keywords: [ Online Learning ] [ Algorithms ] [ Reinforcement Learning ] [ Reinforcement Learning and Planning ]


Finding an effective medical treatment often requires a search by trial and error. Making this search more efficient by minimizing the number of unnecessary trials could lower both costs and patient suffering. We formalize this problem as learning a policy for finding a near-optimal treatment in a minimum number of trials using a causal inference framework. We give a model-based dynamic programming algorithm which learns from observational data while being robust to unmeasured confounding. To reduce time complexity, we suggest a greedy algorithm which bounds the near-optimality constraint. The methods are evaluated on synthetic and real-world healthcare data and compared to model-free reinforcement learning. We find that our methods compare favorably to the model-free baseline while offering a more transparent trade-off between search time and treatment efficacy.
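The greedy variant described above can be illustrated with a minimal sketch. This is not the paper's algorithm: it ignores the causal bounds and the correlations between treatment outcomes that the authors' dynamic programming handles, and all names (`success_prob`, `trial`) are hypothetical. It simply tries treatments in decreasing order of their estimated probability of being effective, stopping at the first success.

```python
def greedy_treatment_search(success_prob, trial, max_trials=None):
    """Greedily try treatments in order of estimated success probability.

    success_prob: dict mapping treatment -> estimated probability that it is
        effective (hypothetical estimates, e.g. fit from observational data).
    trial: callable(treatment) -> bool; True if the treatment works for
        this patient (one "trial" in the search).
    max_trials: optional cap on the number of trials (the search-time side
        of the trade-off discussed in the abstract).
    Returns (effective_treatment_or_None, number_of_trials_used).
    """
    # Order treatments by estimated probability of being effective.
    remaining = sorted(success_prob, key=success_prob.get, reverse=True)
    if max_trials is not None:
        remaining = remaining[:max_trials]
    for n, a in enumerate(remaining, start=1):
        if trial(a):
            return a, n          # found an effective treatment after n trials
    return None, len(remaining)  # no tried treatment was effective
```

For example, with estimates `{"A": 0.2, "B": 0.7, "C": 0.5}` the sketch tries B first, then C, then A. A near-optimality constraint would, roughly, require the returned treatment's efficacy to be within a tolerance of the best available one; enforcing that robustly under unmeasured confounding is what the paper's causal bounds provide.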
