Causal Bandits: Learning Good Interventions via Causal Inference
Finnian Lattimore · Tor Lattimore · Mark Reid
2016 Poster
Abstract
We study the problem of using causal models to improve the rate at which good interventions can be learned online in a stochastic environment. Our formalism combines multi-armed bandits and causal inference to model a novel type of bandit feedback that is not exploited by existing approaches. We propose a new algorithm that exploits the causal feedback and prove a bound on its simple regret that is strictly better (in all quantities) than algorithms that do not use the additional causal information.
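As a rough illustration of the feedback model described above (a sketch under simplifying assumptions, not the paper's algorithm): in a "parallel bandit" setting the reward Y depends on several independent binary causes, the learner intervenes on one variable per round, but observes the values of all variables. Because the non-intervened causes behave as if merely observed, a single round informs the estimate of every arm at once. The reward model below (Y equals the first cause) and all function names are hypothetical.

```python
import random

def sample_environment(n, intervention=None, p=0.1, rng=random):
    """Sample binary causes X_1..X_n; each is 1 with probability p
    unless it is the intervened-on variable."""
    x = [1 if rng.random() < p else 0 for _ in range(n)]
    if intervention is not None:
        i, val = intervention  # do(X_i = val)
        x[i] = val
    # Hypothetical reward model: only the first cause matters.
    y = x[0]
    return x, y

def estimate_all_arms(n, rounds=2000, seed=0):
    """Estimate E[Y | do(X_i = 1)] for every arm i from rounds with no
    intervention. Because the causes are independent here, conditioning
    on X_i = 1 coincides with intervening, so each round updates the
    estimate of every arm whose variable happened to be 1 -- the extra
    causal feedback a standard bandit algorithm would ignore."""
    rng = random.Random(seed)
    counts = [[0, 0] for _ in range(n)]  # [trials, reward sum] per arm
    for _ in range(rounds):
        x, y = sample_environment(n, intervention=None, rng=rng)
        for i in range(n):
            if x[i] == 1:
                counts[i][0] += 1
                counts[i][1] += y
    return [s / t if t else 0.0 for t, s in counts]
```

In this toy model, `estimate_all_arms(3)` recovers that `do(X_1 = 1)` yields reward 1 while the other arms yield roughly the base rate `p`, using only shared observational rounds; a standard bandit would need separate pulls per arm to learn the same thing.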