Estimation and Inference in Distributional Reinforcement Learning
Abstract
In this paper, we study distributional reinforcement learning from the perspective of statistical efficiency. We investigate distributional policy evaluation, aiming to estimate the complete return distribution (denoted \eta^\pi) attained by a given policy \pi. We use the certainty-equivalence method to construct our estimator \hat\eta^\pi_n, based on a generative model. In this setting, a dataset of size \widetilde O(|\mathcal{S}||\mathcal{A}|\varepsilon^{-2p}(1-\gamma)^{-(2p+2)}) suffices to guarantee that the supremum p-Wasserstein metric between \hat\eta^\pi_n and \eta^\pi is less than \varepsilon with high probability. This implies that the distributional policy evaluation problem can be solved sample-efficiently. We also show that, under different mild assumptions, a dataset of size \widetilde O(|\mathcal{S}||\mathcal{A}|\varepsilon^{-2}(1-\gamma)^{-4}) suffices to ensure that the supremum Kolmogorov-Smirnov metric and the supremum total variation metric between \hat\eta^\pi_n and \eta^\pi are below \varepsilon with high probability. Furthermore, we investigate the asymptotic behavior of \hat\eta^\pi_n. We demonstrate that the ``empirical process'' \sqrt{n}(\hat\eta^\pi_n-\eta^\pi) converges weakly to a Gaussian process in the space of bounded functionals on a Lipschitz function class \ell^\infty(\mathcal{F}_{W_1}), and also, when some mild conditions hold, in the space of bounded functionals on an indicator function class \ell^\infty(\mathcal{F}_{\textup{KS}}) and on a bounded measurable function class \ell^\infty(\mathcal{F}_{\textup{TV}}).
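To make the construction concrete, the following is a minimal sketch of the certainty-equivalence estimator in our own notation; the state-action indexing, the deterministic reward r(s,a), the bootstrap pushforward (b_{r,\gamma})_{\#} with b_{r,\gamma}(z)=r+\gamma z, the operator \mathcal{T}^\pi, and the empirical kernel \hat P_n are assumptions made here for illustration and may differ from the paper's exact setup. The return-distribution function \eta^\pi is the fixed point of the distributional Bellman operator:
\[
\eta^\pi(s,a) \;=\; (\mathcal{T}^\pi \eta^\pi)(s,a) \;:=\; \mathbb{E}_{s' \sim P(\cdot\mid s,a),\, a' \sim \pi(\cdot\mid s')}\bigl[(b_{r(s,a),\gamma})_{\#}\,\eta^\pi(s',a')\bigr].
\]
Drawing n generative-model samples s'_1,\dots,s'_n \sim P(\cdot\mid s,a) for each pair (s,a) yields the empirical kernel \hat P_n(s'\mid s,a) = n^{-1}\sum_{i=1}^n \mathbf{1}\{s'_i = s'\}, and the certainty-equivalence estimator \hat\eta^\pi_n is then the exact return-distribution function of \pi in the empirical MDP, i.e., the fixed point of the empirical operator \hat{\mathcal{T}}^\pi obtained by replacing P with \hat P_n.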