TD_gamma: Re-evaluating Complex Backups in Temporal Difference Learning
George Konidaris · Scott Niekum · Philip Thomas

Tue Dec 13th 05:45 -- 11:59 PM

We show that the lambda-return target used in the TD(lambda) family of algorithms is the maximum-likelihood estimator for a specific model of how the variance of an n-step return estimate increases with n. We introduce the gamma-return estimator, an alternative target based on a more accurate model of variance, which defines the TD_gamma family of complex-backup temporal difference learning algorithms. We derive TD_gamma, the gamma-return equivalent of the original TD(lambda) algorithm, which eliminates the lambda parameter but can only perform updates at the end of an episode and requires time and space proportional to the episode length. We then derive a second algorithm, TD_gamma(C), with a capacity parameter C. TD_gamma(C) requires C times more time and memory than TD(lambda) but is incremental and online. We show that TD_gamma outperforms TD(lambda) for any setting of lambda on 4 out of 5 benchmark domains, and that TD_gamma(C) performs as well as or better than TD_gamma for intermediate settings of C.
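To make the two compound targets concrete, the following sketch contrasts the standard lambda-return weighting of n-step returns with an inverse-variance weighting in the spirit of the gamma-return. The specific variance model used here (variance of the n-step return growing as the sum of gamma^(2(i-1)) over the first n steps) is an illustrative assumption consistent with the abstract's description, not necessarily the exact model derived in the paper.

```python
def lambda_weights(lam, length):
    """Weights the lambda-return places on the n-step returns
    G^(1), ..., G^(length) of an episode that ends after `length` steps."""
    w = [(1 - lam) * lam ** (n - 1) for n in range(1, length)]
    # All remaining weight mass goes to the full (Monte Carlo) return.
    w.append(lam ** (length - 1))
    return w


def inverse_variance_weights(gamma, length):
    """Inverse-variance weights for the n-step returns, under the
    assumed model Var(G^(n)) proportional to sum_{i=1}^{n} gamma^(2(i-1))."""
    var = [sum(gamma ** (2 * i) for i in range(n)) for n in range(1, length + 1)]
    inv = [1.0 / v for v in var]
    total = sum(inv)
    return [x / total for x in inv]


def compound_target(weights, n_step_returns):
    """Combine the n-step return estimates into a single backup target."""
    return sum(w * g for w, g in zip(weights, n_step_returns))
```

In both schemes the weights sum to one, so the compound target is a convex combination of the n-step returns; the inverse-variance weighting simply shifts mass toward the shorter, lower-variance returns as gamma grows.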

Author Information

George Konidaris (Duke University)
Scott Niekum (University of Massachusetts Amherst)
Philip Thomas (University of Massachusetts Amherst)