Poster
Concrete Dropout
Yarin Gal · Jiri Hron · Alex Kendall

Wed Dec 6th 06:30 -- 10:30 PM @ Pacific Ballroom #177

Dropout is used as a practical tool to obtain uncertainty estimates in large vision models and reinforcement learning (RL) tasks. But to obtain well-calibrated uncertainty estimates, a grid-search over the dropout probabilities is necessary—a prohibitive operation with large models, and an impossible one with RL. We propose a new dropout variant which gives improved performance and better calibrated uncertainties. Relying on recent developments in Bayesian deep learning, we use a continuous relaxation of dropout’s discrete masks. Together with a principled optimisation objective, this allows for automatic tuning of the dropout probability in large models, and as a result faster experimentation cycles. In RL this allows the agent to adapt its uncertainty dynamically as more data is observed. We analyse the proposed variant extensively on a range of tasks, and give insights into common practice in the field where larger dropout probabilities are often used in deeper model layers.
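To illustrate the idea behind the continuous relaxation mentioned in the abstract, below is a minimal, hedged sketch of a Concrete (relaxed Bernoulli) dropout mask. It is not the authors' reference implementation; names such as concrete_dropout_mask, temperature, and the NumPy-based setting are illustrative assumptions. The key point is that the mask is a smooth function of the dropout probability p, so p can be tuned by gradient descent instead of grid search.

```python
import numpy as np

def concrete_dropout_mask(p, shape, temperature=0.1, eps=1e-7, rng=np.random):
    """Continuous (Concrete) relaxation of a Bernoulli dropout mask.

    Because the mask is a differentiable function of the dropout
    probability p, gradients can flow through it and p can be learned
    jointly with the model weights (illustrative sketch).
    """
    u = rng.uniform(low=eps, high=1.0 - eps, size=shape)  # uniform noise
    # Relaxed Bernoulli sample: approaches a hard 0/1 mask as temperature -> 0
    logits = (np.log(p + eps) - np.log(1.0 - p + eps)
              + np.log(u) - np.log(1.0 - u)) / temperature
    drop_prob = 1.0 / (1.0 + np.exp(-logits))
    return 1.0 - drop_prob  # keep-mask: values near 1 keep a unit, near 0 drop it

def apply_concrete_dropout(x, p, temperature=0.1):
    """Apply the relaxed mask and rescale, as in standard dropout."""
    mask = concrete_dropout_mask(p, x.shape, temperature)
    return x * mask / (1.0 - p)

# Example usage (hypothetical values):
x = np.random.randn(4, 8)
y = apply_concrete_dropout(x, p=0.2)
```

In the full method, p is treated as a trainable parameter and regularised by a principled objective (derived in the paper), which is what allows the uncertainty to adapt as more data is observed.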

Author Information

Yarin Gal (University of Oxford)
Jiri Hron (University of Cambridge)
Alex Kendall (University of Cambridge)
