In stochastic multi-armed bandits, the objective is to resolve the exploration-exploitation dilemma and ultimately maximize the expected reward. Nonetheless, in many practical problems, maximizing the expected reward is not the most desirable objective. In this paper, we introduce a novel setting based on the principle of risk-aversion, where the objective is to compete against the arm with the best risk-return trade-off. This setting proves intrinsically more difficult than the standard multi-armed bandit setting, due in part to an exploration risk that introduces a regret associated with the variability of an algorithm. Using variance as a measure of risk, we introduce two new algorithms, investigate their theoretical guarantees, and report preliminary empirical results.
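The risk-return trade-off described above can be made concrete with a mean-variance index MV_i = sigma_i^2 - rho * mu_i, where the best arm is the one with the *lowest* index. The sketch below is not a reproduction of the paper's algorithms; it is a minimal, generic lower-confidence-bound strategy on the empirical mean-variance index, with an illustrative confidence width. The function name, the constant `rho`, and the confidence term are assumptions for illustration only.

```python
import numpy as np

def mv_lcb_bandit(arms, horizon, rho=1.0, rng=None):
    """Risk-averse bandit sketch (not the paper's exact algorithm).

    Pulls the arm minimizing a lower confidence bound on the empirical
    mean-variance index MV_i = var_i - rho * mean_i (lower is better).
    `arms` is a list of callables, each returning one stochastic reward.
    """
    if rng is None:
        rng = np.random.default_rng()
    K = len(arms)
    rewards = [[] for _ in range(K)]

    # Pull each arm twice so the empirical variance is well defined.
    for i in range(K):
        rewards[i].extend(arms[i]() for _ in range(2))

    for t in range(2 * K, horizon):
        lcb = np.empty(K)
        for i in range(K):
            x = np.asarray(rewards[i])
            mv = x.var() - rho * x.mean()           # empirical mean-variance index
            conf = np.sqrt(np.log(t + 1) / len(x))  # illustrative confidence width
            lcb[i] = mv - (1 + rho) * conf          # optimism: lower bound on MV
        i = int(np.argmin(lcb))
        rewards[i].append(arms[i]())

    mv_hat = [np.asarray(x).var() - rho * np.asarray(x).mean() for x in rewards]
    return int(np.argmin(mv_hat)), [len(x) for x in rewards]
```

For example, with a small `rho`, an arm with mean 0.5 and tiny variance should be preferred over an arm with mean 0.9 and large variance, even though the latter has the higher expected reward; this is exactly the gap between the risk-averse objective and the standard expected-reward objective.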
Author Information
Amir Sani (Centre d'Economie de la Sorbonne, CNRS)
Alessandro Lazaric (Facebook Artificial Intelligence Research)
Remi Munos (Google DeepMind)
More from the Same Authors
- 2017 Poster: Regret Minimization in MDPs with Options without Prior Knowledge (Ronan Fruit · Matteo Pirotta · Alessandro Lazaric · Emma Brunskill)
- 2017 Poster: Efficient Second-Order Online Kernel Learning with Adaptive Embedding (Daniele Calandriello · Alessandro Lazaric · Michal Valko)
- 2017 Spotlight: Regret Minimization in MDPs with Options without Prior Knowledge (Ronan Fruit · Matteo Pirotta · Alessandro Lazaric · Emma Brunskill)
- 2015 Poster: Black-box optimization of noisy functions with unknown smoothness (Jean-Bastien Grill · Michal Valko · Remi Munos)
- 2014 Poster: Exploiting easy data in online optimization (Amir Sani · Gergely Neu · Alessandro Lazaric)
- 2014 Poster: Active Regression by Stratification (Sivan Sabato · Remi Munos)
- 2014 Poster: Best-Arm Identification in Linear Bandits (Marta Soare · Alessandro Lazaric · Remi Munos)
- 2014 Poster: Bounded Regret for Finite-Armed Structured Bandits (Tor Lattimore · Remi Munos)
- 2014 Poster: Efficient learning by implicit exploration in bandit problems with side observations (Tomáš Kocák · Gergely Neu · Michal Valko · Remi Munos)
- 2014 Spotlight: Exploiting easy data in online optimization (Amir Sani · Gergely Neu · Alessandro Lazaric)
- 2014 Poster: Optimistic Planning in Markov Decision Processes Using a Generative Model (Balázs Szörényi · Gunnar Kedenburg · Remi Munos)
- 2014 Poster: Sparse Multi-Task Reinforcement Learning (Daniele Calandriello · Alessandro Lazaric · Marcello Restelli)
- 2013 Workshop: Bayesian Optimization in Theory and Practice (Matthew Hoffman · Jasper Snoek · Nando de Freitas · Michael A Osborne · Ryan Adams · Sebastien Bubeck · Philipp Hennig · Remi Munos · Andreas Krause)
- 2013 Poster: Thompson Sampling for 1-Dimensional Exponential Family Bandits (Nathaniel Korda · Emilie Kaufmann · Remi Munos)
- 2013 Poster: Aggregating Optimistic Planning Trees for Solving Markov Decision Processes (Gunnar Kedenburg · Raphael Fonteneau · Remi Munos)
- 2012 Poster: Bandit Algorithms boost Brain Computer Interfaces for motor-task selection of a brain-controlled button (Joan Fruitet · Alexandra Carpentier · Remi Munos · Maureen Clerc)
- 2012 Poster: Best Arm Identification: A Unified Approach to Fixed Budget and Fixed Confidence (Victor Gabillon · Mohammad Ghavamzadeh · Alessandro Lazaric)
- 2012 Poster: Adaptive Stratified Sampling for Monte-Carlo integration of Differentiable functions (Alexandra Carpentier · Remi Munos)
- 2011 Poster: Finite Time Analysis of Stratified Sampling for Monte Carlo (Alexandra Carpentier · Remi Munos)
- 2011 Poster: Multi-Bandit Best Arm Identification (Victor Gabillon · Mohammad Ghavamzadeh · Alessandro Lazaric · Sebastien Bubeck)
- 2011 Poster: Selecting the State-Representation in Reinforcement Learning (Odalric-Ambrym Maillard · Remi Munos · Daniil Ryabko)
- 2011 Poster: Sparse Recovery with Brownian Sensing (Alexandra Carpentier · Odalric-Ambrym Maillard · Remi Munos)
- 2011 Poster: Transfer from Multiple MDPs (Alessandro Lazaric · Marcello Restelli)
- 2011 Session: Spotlight Session 2 (Remi Munos)
- 2011 Session: Oral Session 1 (Remi Munos)
- 2011 Poster: Optimistic Optimization of Deterministic Functions (Remi Munos)
- 2011 Poster: Speedy Q-Learning (Mohammad Gheshlaghi Azar · Remi Munos · Mohammad Ghavamzadeh · Hilbert J Kappen)
- 2010 Spotlight: LSTD with Random Projections (Mohammad Ghavamzadeh · Alessandro Lazaric · Odalric-Ambrym Maillard · Remi Munos)
- 2010 Poster: LSTD with Random Projections (Mohammad Ghavamzadeh · Alessandro Lazaric · Odalric-Ambrym Maillard · Remi Munos)
- 2010 Poster: Scrambled Objects for Least-Squares Regression (Odalric-Ambrym Maillard · Remi Munos)
- 2010 Poster: Error Propagation for Approximate Policy and Value Iteration (Amir-massoud Farahmand · Remi Munos · Csaba Szepesvari)
- 2009 Poster: Sensitivity analysis in HMMs with application to likelihood maximization (Pierre-Arnaud Coquelin · Romain Deguest · Remi Munos)
- 2009 Poster: Compressed Least-Squares Regression (Odalric-Ambrym Maillard · Remi Munos)
- 2008 Poster: Online Optimization in X-Armed Bandits (Sebastien Bubeck · Remi Munos · Gilles Stoltz · Csaba Szepesvari)
- 2008 Poster: Algorithms for Infinitely Many-Armed Bandits (Yizao Wang · Jean-Yves Audibert · Remi Munos)
- 2008 Spotlight: Algorithms for Infinitely Many-Armed Bandits (Yizao Wang · Jean-Yves Audibert · Remi Munos)
- 2008 Poster: Particle Filter-based Policy Gradient in POMDPs (Pierre-Arnaud Coquelin · Romain Deguest · Remi Munos)
- 2007 Spotlight: Reinforcement Learning in Continuous Action Spaces through Sequential Monte Carlo Methods (Alessandro Lazaric · Marcello Restelli · Andrea Bonarini)
- 2007 Poster: Reinforcement Learning in Continuous Action Spaces through Sequential Monte Carlo Methods (Alessandro Lazaric · Marcello Restelli · Andrea Bonarini)
- 2007 Poster: Fitted Q-iteration in continuous action-space MDPs (Remi Munos · András Antos · Csaba Szepesvari)