Follow-the-Perturbed-Leader for Decoupled Bandits: Best-of-Both-Worlds and Practicality
Chaiwon Kim · Jongyeong Lee · Min-hwan Oh
Abstract
We study the decoupled multi-armed bandit (MAB) problem, where the learner selects one arm for exploration and one arm for exploitation in each round. The loss of the explored arm is observed but not counted, while the loss of the exploited arm is incurred without being observed. We propose a policy within the Follow-the-Perturbed-Leader (FTPL) framework using Pareto perturbations. Our policy achieves (near-)optimal regret regardless of the environment, i.e., Best-of-Both-Worlds (BOBW): constant regret in the stochastic regime, improving upon the optimal bound for standard MABs, and minimax optimal regret in the adversarial regime. Moreover, our policy avoids both the optimization step required by the previous BOBW policy, Decoupled-Tsallis-INF [Rouyer and Seldin, 2020], and the resampling step that is typically necessary in FTPL. Consequently, it achieves a substantial computational improvement, running about $20$ times faster than Decoupled-Tsallis-INF, while demonstrating better empirical performance in both regimes.
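To make the setting concrete, the following is a minimal Python sketch of a generic FTPL-style policy with Pareto perturbations in the decoupled protocol (explore one arm and observe its loss without incurring it; exploit another arm and incur its loss without observing it). It is illustrative only and not the paper's actual policy: the exploitation rule, the fixed learning rate `eta`, the Pareto shape `alpha`, and the Monte Carlo probability estimate (the very resampling step the proposed policy avoids) are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)


def pareto_perturbation(size, alpha=2.0):
    """Draw i.i.d. Pareto(alpha) perturbations with support [1, inf)."""
    return rng.pareto(alpha, size) + 1.0


def decoupled_ftpl(losses, eta=1.0, alpha=2.0, n_mc=1000):
    """Illustrative FTPL sketch for the decoupled bandit setting.

    losses: (T, K) array of per-round losses chosen by the environment.
    Returns the cumulative exploitation regret against the best fixed arm.
    """
    T, K = losses.shape
    L_hat = np.zeros(K)      # cumulative importance-weighted loss estimates
    incurred = np.zeros(T)

    for t in range(T):
        # Exploration arm: perturbed leader under Pareto noise.
        r = pareto_perturbation(K, alpha)
        explore = int(np.argmin(L_hat - eta * r))

        # Exploitation arm: plain leader on current estimates (placeholder choice).
        exploit = int(np.argmin(L_hat))
        incurred[t] = losses[t, exploit]   # loss is incurred but never observed

        # Probability of exploring this arm, estimated by Monte Carlo resampling
        # (for illustration only; the paper's policy avoids this step).
        R = pareto_perturbation((n_mc, K), alpha)
        p = np.mean(np.argmin(L_hat - eta * R, axis=1) == explore)
        p = max(p, 1.0 / n_mc)

        # Importance-weighted update from the observed (but uncounted) loss.
        L_hat[explore] += losses[t, explore] / p

    best_arm = int(np.argmin(losses.sum(axis=0)))
    return incurred.cumsum() - losses[:, best_arm].cumsum()
```

Note how the decoupling shows up in the code: feedback comes only from `explore`, while regret accumulates only through `exploit`, so the two choices can be optimized separately.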