Poster
Efficient Algorithms for Generalized Linear Bandits with Heavy-tailed Rewards
Bo Xue · Yimu Wang · Yuanyu Wan · Jinfeng Yi · Lijun Zhang
Great Hall & Hall B1+B2 (level 1) #1902
Abstract:
This paper investigates the problem of generalized linear bandits with heavy-tailed rewards, whose $(1+\epsilon)$-th moment is bounded for some $\epsilon \in (0,1]$. Although there exist methods for generalized linear bandits, most of them focus on bounded or sub-Gaussian rewards and are not well-suited for many real-world scenarios, such as financial markets and web advertising. To address this issue, we propose two novel algorithms based on truncation and mean of medians. These algorithms achieve an almost optimal regret bound of $\widetilde{O}(dT^{\frac{1}{1+\epsilon}})$, where $d$ is the dimension of contextual information and $T$ is the time horizon. Our truncation-based algorithm supports online learning, distinguishing it from existing truncation-based approaches. Additionally, our mean-of-medians-based algorithm requires only $O(\log T)$ rewards and one estimator per epoch, making it more practical. Moreover, our algorithms improve the regret bounds by a logarithmic factor compared to existing algorithms when $\epsilon = 1$. Numerical experimental results confirm the merits of our algorithms.
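The abstract's two estimation strategies, truncation and mean of medians, are both standard tools for robust mean estimation under heavy tails. The sketch below is a minimal illustration of these generic estimators on heavy-tailed samples, not the authors' bandit algorithms; the threshold and block-count parameters are illustrative assumptions.

```python
import numpy as np

def truncated_mean(samples, threshold):
    """Truncation-based estimator: zero out samples whose magnitude
    exceeds a threshold before averaging, limiting the influence of
    heavy-tailed outliers."""
    s = np.asarray(samples, dtype=float)
    return float(np.mean(np.where(np.abs(s) <= threshold, s, 0.0)))

def median_of_means(samples, num_blocks):
    """Median-of-means estimator: split samples into blocks, average
    each block, and return the median of the block means."""
    s = np.asarray(samples, dtype=float)
    blocks = np.array_split(s, num_blocks)
    return float(np.median([b.mean() for b in blocks]))

rng = np.random.default_rng(0)
# Heavy-tailed rewards: Lomax/Pareto samples with shape 1.5 have a
# finite mean but infinite variance, so the plain sample mean is
# unstable while both robust estimators remain well behaved.
rewards = rng.pareto(1.5, size=10_000)
print(truncated_mean(rewards, threshold=50.0))
print(median_of_means(rewards, num_blocks=20))
```

Either estimator can replace the empirical mean inside a bandit's parameter-estimation step; the paper's contribution is making such estimators efficient (online updates, and one estimator per epoch) in the generalized linear bandit setting.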