
Nearly Optimal Algorithms for Linear Contextual Bandits with Adversarial Corruptions
Jiafan He · Dongruo Zhou · Tong Zhang · Quanquan Gu

Tue Nov 29 02:00 PM -- 04:00 PM (PST) @ Hall J #533
We study the linear contextual bandit problem in the presence of adversarial corruption, where the reward at each round is corrupted by an adversary, and the corruption level (i.e., the sum of corruption magnitudes over the horizon) is $C\geq 0$. The best-known algorithms in this setting are limited: they are either computationally inefficient, require a strong assumption on the corruption, or incur a regret at least $C$ times worse than the regret without corruption. In this paper, to overcome these limitations, we propose a new algorithm based on the principle of optimism in the face of uncertainty. At the core of our algorithm is a weighted ridge regression, where the weight of each chosen action depends on its confidence up to some threshold. We show that for both the known-$C$ and unknown-$C$ cases, our algorithm with a proper choice of hyperparameter achieves a regret that nearly matches the corresponding lower bound. Thus, our algorithm is nearly optimal up to logarithmic factors in both cases. Notably, our algorithm achieves near-optimal regret for the corrupted and uncorrupted ($C=0$) cases simultaneously.
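The abstract's core idea, a weighted ridge regression where each chosen action's weight is capped by its confidence, can be illustrated with a small sketch. This is a hypothetical simplification, not the paper's exact algorithm: the weighting rule `min(1, alpha / ||x||_{Sigma^{-1}})`, the threshold `alpha`, and the regularizer `lam` are illustrative assumptions, and the actual method couples this estimator with an optimistic action-selection rule.

```python
import numpy as np

def weighted_ridge_estimate(X, y, alpha, lam=1.0):
    """Sketch of confidence-weighted ridge regression (hypothetical
    simplification of the paper's estimator).

    X: (t, d) array of chosen action features; y: (t,) observed rewards.
    alpha: confidence threshold hyperparameter; lam: ridge parameter.
    """
    t, d = X.shape
    Sigma = lam * np.eye(d)  # regularized (weighted) Gram matrix
    b = np.zeros(d)
    for i in range(t):
        x = X[i]
        # "Confidence" of x under the current covariance: ||x||_{Sigma^{-1}}.
        norm = np.sqrt(x @ np.linalg.solve(Sigma, x))
        # Cap the weight: uncertain (high-norm) actions are down-weighted,
        # which limits how much a single corrupted reward can move theta_hat.
        w = min(1.0, alpha / norm) if norm > 0 else 1.0
        Sigma += w * np.outer(x, x)
        b += w * y[i] * x
    # Weighted ridge solution: (lam*I + sum_i w_i x_i x_i^T)^{-1} sum_i w_i y_i x_i
    return np.linalg.solve(Sigma, b)
```

With a large threshold `alpha`, every weight saturates at 1 and the estimator reduces to ordinary ridge regression; smaller `alpha` trades some statistical efficiency for robustness to corrupted rewards.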

Author Information

Jiafan He (University of California, Los Angeles)
Dongruo Zhou (UCLA)
Tong Zhang (The Hong Kong University of Science and Technology)
Quanquan Gu (UCLA)
