

Poster

Cold-Start Reinforcement Learning with Softmax Policy Gradient

Nan Ding · Radu Soricut

Keywords: [ Recurrent Networks ] [ Attention Models ] [ Natural Language Processing ] [ Reinforcement Learning ]

[ Paper ]
2017 Poster

Abstract:

Policy-gradient approaches to reinforcement learning have two common and undesirable overhead procedures, namely warm-start training and sample variance reduction. In this paper, we describe a reinforcement learning method based on a softmax value function that requires neither of these procedures. Our method combines the advantages of policy-gradient methods with the efficiency and simplicity of maximum-likelihood approaches. We apply this new cold-start reinforcement learning method to training sequence generation models for structured output prediction problems. We validate the method empirically on automatic summarization and image captioning tasks.
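To make the idea of a softmax value function concrete, the sketch below is a minimal toy illustration, not the authors' implementation: it assumes a small discrete output space with illustrative `logits` and `reward` values, and computes V(θ) = log Σ_z p_θ(z) exp(r(z)). The gradient of this objective weights each log-probability by a reward-augmented distribution q(z) ∝ p_θ(z) exp(r(z)), so in this simplified setting no separate variance-reduction baseline or warm-start phase is required.

```python
# A minimal sketch (assumed toy setting, not the paper's code) of a
# softmax value objective over a small discrete output space. In the
# paper, z would range over output sequences; here it is a single
# categorical choice for clarity.
import torch

torch.manual_seed(0)

num_outputs = 5
logits = torch.randn(num_outputs, requires_grad=True)  # model parameters theta
reward = torch.tensor([0.1, 0.9, 0.3, 0.0, 0.5])       # illustrative rewards r(z)

# Softmax value function: V(theta) = log sum_z p_theta(z) * exp(r(z)).
# logsumexp(log p + r) computes this in a numerically stable way.
log_p = torch.log_softmax(logits, dim=0)
value = torch.logsumexp(log_p + reward, dim=0)

# Autograd yields dV/dlogits, whose analytic form weights each
# grad log p_theta(z) by q(z) ∝ p_theta(z) * exp(r(z)); a gradient
# ascent step on V would move probability mass toward high-reward z.
value.backward()
print("V =", value.item())
print("dV/dlogits =", logits.grad)
```

One way to read the abstract's claim about combining policy-gradient and maximum-likelihood advantages in this toy setting: when the reward concentrates on a single reference output, q collapses onto that output and the update approaches the maximum-likelihood gradient for it, while a spread-out reward recovers a policy-gradient-style weighting over outputs.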
