Poster

Beyond task diversity: provable representation transfer for sequential multitask linear bandits

Thang Duong · Zhi Wang · Chicheng Zhang

West Ballroom A-D #6601
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract: We study lifelong learning in linear bandits, where a learner interacts with a sequence of linear bandit tasks whose parameters lie in an $m$-dimensional subspace of $\mathbb{R}^d$, thereby sharing a low-rank representation. Current literature typically assumes that the tasks are diverse, i.e., their parameters uniformly span the $m$-dimensional subspace. This assumption allows the low-rank representation to be learned before all tasks are revealed, which can be unrealistic in real-world applications. In this work, we present the first nontrivial result for sequential multi-task linear bandits without the task diversity assumption. We develop an algorithm that efficiently learns and transfers low-rank representations. When facing $N$ tasks, each played over $\tau$ rounds, our algorithm achieves a regret guarantee of $\tilde{O}\big(N\sqrt{m\tau} + N^{2/3}\tau^{2/3}\sqrt{d}\,m^{1/3} + Nd^2 + \tau m d\big)$ under the ellipsoid action set assumption. This result can significantly improve upon the baseline of $\tilde{O}(N\sqrt{d\tau})$, which does not leverage the low-rank structure, when the number of tasks $N$ is sufficiently large and $m \ll d$. We also demonstrate empirically on synthetic data that our algorithm outperforms baseline algorithms that rely on the task diversity assumption.
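As a rough sanity check on the regime in which the stated regret bound beats the baseline, the two $\tilde{O}(\cdot)$ expressions can be compared numerically with constants and logarithmic factors dropped. This is only an illustrative sketch: the parameter values below are hypothetical and not taken from the paper's experiments.

```python
import math

def lowrank_bound(N, tau, m, d):
    # Regret terms from the stated guarantee, with constants and log factors dropped:
    # N*sqrt(m*tau) + N^(2/3)*tau^(2/3)*sqrt(d)*m^(1/3) + N*d^2 + tau*m*d
    return (N * math.sqrt(m * tau)
            + N ** (2 / 3) * tau ** (2 / 3) * math.sqrt(d) * m ** (1 / 3)
            + N * d ** 2
            + tau * m * d)

def baseline_bound(N, tau, d):
    # Playing each task as an independent d-dimensional linear bandit: N*sqrt(d*tau)
    return N * math.sqrt(d * tau)

# Hypothetical regime with many tasks (large N) and m << d:
N, tau, m, d = 10**7, 10**8, 5, 100
print(f"low-rank bound: {lowrank_bound(N, tau, m, d):.3e}")
print(f"baseline bound: {baseline_bound(N, tau, d):.3e}")
# In this regime the low-rank bound is smaller than the baseline.
```

With these values the $N\sqrt{m\tau}$ term dominates the low-rank bound, and all four terms together stay below the baseline's $N\sqrt{d\tau}$, consistent with the abstract's claim that the improvement kicks in when $N$ is large and $m \ll d$.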
