While using shaped rewards can be beneficial when solving sparse-reward tasks, their successful application often requires careful engineering and is problem-specific. For instance, in tasks where the agent must achieve some goal state, simple distance-to-goal reward shaping often fails, as it renders learning vulnerable to local optima. We introduce a simple and effective model-free method to learn from shaped distance-to-goal rewards on tasks where success depends on reaching a goal state. Our method adds an auxiliary distance-based reward, computed over pairs of rollouts, that encourages diverse exploration. This approach effectively prevents learning dynamics from stabilizing around local optima induced by the naive distance-to-goal reward shaping and enables policies to efficiently solve sparse-reward tasks. Our augmented objective does not require any additional reward engineering or domain expertise to implement and converges to the original sparse objective as the agent learns to solve the task. We demonstrate that our method successfully solves a variety of hard-exploration tasks (including maze navigation and 3D construction in a Minecraft environment) where naive distance-based reward shaping otherwise fails, and where intrinsic curiosity and reward relabeling strategies exhibit poor performance.
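To make the pairwise auxiliary reward described above concrete, the following Python snippet is a minimal sketch, not the paper's exact formulation: it assumes a Euclidean distance metric and a hypothetical shaping rule in which each of two "sibling" rollouts (same start state and goal) is rewarded for ending close to the goal and far from its sibling's terminal state. The names `euclidean` and `sibling_shaped_rewards` are illustrative.

```python
# Hedged sketch of a pairwise shaped reward: reward each rollout for ending
# near the goal while ending far from its sibling rollout. The specific form
# -d(s_T, g) + d(s_T, s_bar_T) is an assumption for illustration only.
import numpy as np


def euclidean(a, b):
    """Distance metric d(., .); assumed Euclidean for this sketch."""
    return float(np.linalg.norm(np.asarray(a, dtype=float) - np.asarray(b, dtype=float)))


def sibling_shaped_rewards(terminal_a, terminal_b, goal, d=euclidean):
    """Shaped terminal rewards for two rollouts that share a start state and goal.

    Each rollout gets the naive distance-to-goal term plus an auxiliary term that
    pushes it away from its sibling's terminal state, discouraging both rollouts
    from collapsing onto the same local optimum.
    """
    r_a = -d(terminal_a, goal) + d(terminal_a, terminal_b)
    r_b = -d(terminal_b, goal) + d(terminal_b, terminal_a)
    return r_a, r_b


if __name__ == "__main__":
    # Hypothetical terminal states of two rollouts in a 2D maze.
    goal = (5.0, 5.0)
    end_a, end_b = (4.0, 4.5), (1.0, 1.0)
    print(sibling_shaped_rewards(end_a, end_b, goal))
```

Note that when a rollout actually reaches the goal, the auxiliary term no longer changes which behavior is optimal, which is consistent with the abstract's claim that the augmented objective converges to the original sparse objective as the task is solved.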
Author Information
Alexander Trott (Salesforce Research)
Stephan Zheng (Salesforce)
Caiming Xiong (Salesforce)
Richard Socher (Salesforce)
Richard Socher is Chief Scientist at Salesforce. He leads the company's research efforts and brings state-of-the-art artificial intelligence solutions into the platform. Previously, Richard was an adjunct professor in the Stanford Computer Science Department and the CEO and founder of MetaMind, a startup acquired by Salesforce in April 2016. MetaMind's deep learning AI platform analyzes, labels, and makes predictions on image and text data so businesses can make smarter, faster, and more accurate decisions.
More from the Same Authors
- 2021 Poster: Evaluating State-of-the-Art Classification Models Against Bayes Optimality
  Ryan Theisen · Huan Wang · Lav Varshney · Caiming Xiong · Richard Socher
- 2020: Contributed Talk - ProGen: Language Modeling for Protein Generation
  Ali Madani · Bryan McCann · Nikhil Naik · · Possu Huang · Richard Socher
- 2020 Workshop: Machine Learning for Economic Policy
  Stephan Zheng · Alexander Trott · Annie Liang · Jamie Morgenstern · David Parkes · Nika Haghtalab
- 2020 Poster: Towards Theoretically Understanding Why SGD Generalizes Better Than Adam in Deep Learning
  Pan Zhou · Jiashi Feng · Chao Ma · Caiming Xiong · Steven Chu Hong Hoi · Weinan E
- 2020 Poster: Theory-Inspired Path-Regularized Differential Network Architecture Search
  Pan Zhou · Caiming Xiong · Richard Socher · Steven Chu Hong Hoi
- 2020 Oral: Theory-Inspired Path-Regularized Differential Network Architecture Search
  Pan Zhou · Caiming Xiong · Richard Socher · Steven Chu Hong Hoi
- 2020 Poster: Online Structured Meta-learning
  Huaxiu Yao · Yingbo Zhou · Mehrdad Mahdavi · Zhenhui (Jessie) Li · Richard Socher · Caiming Xiong
- 2020 Poster: Towards Understanding Hierarchical Learning: Benefits of Neural Representations
  Minshuo Chen · Yu Bai · Jason Lee · Tuo Zhao · Huan Wang · Caiming Xiong · Richard Socher
- 2020 Poster: A Simple Language Model for Task-Oriented Dialogue
  Ehsan Hosseini-Asl · Bryan McCann · Chien-Sheng Wu · Semih Yavuz · Richard Socher
- 2020 Spotlight: A Simple Language Model for Task-Oriented Dialogue
  Ehsan Hosseini-Asl · Bryan McCann · Chien-Sheng Wu · Semih Yavuz · Richard Socher
- 2019 Poster: LiteEval: A Coarse-to-Fine Framework for Resource Efficient Video Recognition
  Zuxuan Wu · Caiming Xiong · Yu-Gang Jiang · Larry Davis
- 2019 Poster: NAOMI: Non-Autoregressive Multiresolution Sequence Imputation
  Yukai Liu · Rose Yu · Stephan Zheng · Eric Zhan · Yisong Yue
- 2017: Contributed Talks 1
  Cinjon Resnick · Ying Wen · Stephan Zheng · Mukul Bhutani · Edward Choi
- 2017 Poster: Learned in Translation: Contextualized Word Vectors
  Bryan McCann · James Bradbury · Caiming Xiong · Richard Socher
- 2016: Richard Socher - Tackling the Limits of Deep Learning for NLP
  Richard Socher
- 2016 Poster: Generating Long-term Trajectories Using Deep Hierarchical Networks
  Stephan Zheng · Yisong Yue · Patrick Lucey
- 2014 Poster: Global Belief Recursive Neural Networks
  Romain Paulus · Richard Socher · Christopher Manning
- 2013 Demonstration: Easy Text Classification with Machine Learning
  Richard Socher · Romain Paulus · Bryan McCann · Andrew Y Ng
- 2013 Poster: Reasoning With Neural Tensor Networks for Knowledge Base Completion
  Richard Socher · Danqi Chen · Christopher D Manning · Andrew Y Ng
- 2013 Poster: Zero-Shot Learning Through Cross-Modal Transfer
  Richard Socher · Milind Ganjoo · Christopher D Manning · Andrew Y Ng
- 2012 Poster: Recursive Deep Learning on 3D Point Clouds
  Richard Socher · Bharath Bath · Brody Huval · Christopher D Manning · Andrew Y Ng
- 2011 Poster: Unfolding Recursive Autoencoders for Paraphrase Detection
  Richard Socher · Eric H Huang · Jeffrey Pennin · Andrew Y Ng · Christopher D Manning
- 2009 Poster: A Bayesian Analysis of Dynamics in Free Recall
  Richard Socher · Samuel J Gershman · Adler Perotte · Per Sederberg · David Blei · Kenneth Norman