

Poster

Efficient Algorithms for Generalized Linear Bandits with Heavy-tailed Rewards

Bo Xue · Yimu Wang · Yuanyu Wan · Jinfeng Yi · Lijun Zhang

Great Hall & Hall B1+B2 (level 1) #1902

Abstract: This paper investigates the problem of generalized linear bandits with heavy-tailed rewards, whose (1+ϵ)-th moment is bounded for some ϵ ∈ (0, 1]. Although there exist methods for generalized linear bandits, most of them focus on bounded or sub-Gaussian rewards and are not well-suited for many real-world scenarios, such as financial markets and web advertising. To address this issue, we propose two novel algorithms based on truncation and mean of medians. These algorithms achieve an almost optimal regret bound of Õ(d·T^(1/(1+ϵ))), where d is the dimension of contextual information and T is the time horizon. Our truncation-based algorithm supports online learning, distinguishing it from existing truncation-based approaches. Additionally, our mean-of-medians-based algorithm requires only O(log T) rewards and one estimator per epoch, making it more practical. Moreover, our algorithms improve the regret bounds by a logarithmic factor compared to existing algorithms when ϵ = 1. Numerical experimental results confirm the merits of our algorithms.
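The two estimation ideas named in the abstract, truncation and mean of medians, can be illustrated on plain scalar rewards. The sketch below is a minimal, hypothetical illustration of these generic robust-mean techniques, not the paper's actual algorithms, which integrate the estimators into online generalized linear bandit updates; the function names, the zero-out truncation rule, and the group size k are assumptions for illustration.

```python
import numpy as np

def truncated_mean(rewards, b):
    """Truncation idea: discard (zero out) any observed reward whose
    magnitude exceeds the threshold b, then average. This tames heavy
    tails at the cost of a small, controllable bias.
    (Illustrative rule; the paper's truncation scheme may differ.)"""
    r = np.asarray(rewards, dtype=float)
    return float(np.mean(np.where(np.abs(r) <= b, r, 0.0)))

def mean_of_medians(rewards, k):
    """Mean-of-medians idea: split the samples into groups of size k,
    take the median within each group (robust to outliers), then
    average the group medians. Leftover samples are dropped here.
    (Illustrative grouping; the paper's epoch structure may differ.)"""
    r = np.asarray(rewards, dtype=float)
    n = (len(r) // k) * k  # keep a whole number of groups
    groups = r[:n].reshape(-1, k)
    return float(np.mean(np.median(groups, axis=1)))
```

For example, `truncated_mean([1, 2, 100], b=10)` zeroes the outlier 100 and returns 1.0, while `mean_of_medians([1, 2, 3, 4, 5, 6], k=3)` averages the group medians 2 and 5 to get 3.5; the plain sample mean of the first list would be 34.3, badly skewed by the single heavy-tailed draw.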
