Poster
Provably Efficient Reinforcement Learning with Multinomial Logit Function Approximation
Long-Fei Li · Yu-Jie Zhang · Peng Zhao · Zhi-Hua Zhou
West Ballroom A-D #6403
Abstract:
We study a new class of MDPs that employs multinomial logit (MNL) function approximation to ensure valid probability distributions over the state space. Despite its significant benefits, incorporating the non-linear function raises substantial challenges in both *statistical* and *computational* efficiency. The best-known result of Hwang and Oh [2023] achieves an $\widetilde{O}(\kappa^{-1} d H^{2} \sqrt{K})$ regret upper bound, where $\kappa$ is a problem-dependent quantity, $d$ is the feature dimension, $H$ is the episode length, and $K$ is the number of episodes. However, we observe that $\kappa^{-1}$ exhibits polynomial dependence on the number of reachable states, which can be as large as the state space size in the worst case and thus undermines the motivation for function approximation. Additionally, their method requires storing all historical data, and its time complexity scales linearly with the episode count, which is computationally expensive. In this work, we propose a statistically efficient algorithm that achieves a regret of $\widetilde{O}(d H^{2} \sqrt{K} + \kappa^{-1} d^{2} H^{2})$, eliminating the dependence on $\kappa^{-1}$ in the dominant term for the first time. We then address the computational challenges by introducing an enhanced algorithm that achieves the same regret guarantee but with only constant cost per episode. Finally, we establish the first lower bound for this problem, justifying the optimality of our results in $d$ and $K$.
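To make the setting concrete, the MNL function approximation studied in this line of work (following Hwang and Oh [2023]) parameterizes the transition kernel with a softmax over features; the sketch below uses an illustrative feature map $\varphi$ and unknown parameter $\theta^\star$, and the notation is indicative rather than the paper's exact definition:
$$
P_{\theta^\star}(s' \mid s, a) \;=\; \frac{\exp\!\big(\varphi(s, a, s')^{\top} \theta^\star\big)}{\sum_{\tilde{s} \in \mathcal{S}_{s,a}} \exp\!\big(\varphi(s, a, \tilde{s})^{\top} \theta^\star\big)},
$$
where $\mathcal{S}_{s,a}$ denotes the set of states reachable from $(s, a)$. The softmax normalization is what guarantees a valid probability distribution over next states, the benefit highlighted in the abstract, while the non-linearity of the softmax is the source of the statistical and computational difficulties discussed above.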