Poster

Community Exploration: From Offline Optimization to Online Learning

Xiaowei Chen · Weiran Huang · Wei Chen · John C. S. Lui

Room 517 AB #153

Keywords: [ Adaptive Data Analysis ] [ Bandit Algorithms ]


Abstract:

We introduce the community exploration problem, which has various real-world applications such as online advertising. In this problem, an explorer allocates a limited budget to explore communities so as to maximize the number of members they can meet. We provide a systematic study of the community exploration problem, from offline optimization to online learning. For the offline setting, where the sizes of communities are known, we prove that greedy methods are optimal for both non-adaptive and adaptive exploration. For the online setting, where the sizes of communities are unknown and must be learned from multi-round explorations, we propose an "upper confidence"-like algorithm that achieves logarithmic regret bounds. By combining the feedback from different rounds, we can achieve a constant regret bound.
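The abstract does not spell out the exploration model, so the following is only a minimal sketch of an offline greedy budget allocation under an assumed model: each community i has d_i members, each exploration of community i meets one uniformly random member, and the objective is the expected number of distinct members met within a total budget. The community sizes, marginal-gain formula, and function names here are illustrative assumptions, not the authors' exact algorithm.

```python
"""Sketch: greedy budget allocation for community exploration (assumed model)."""
import heapq


def greedy_allocation(sizes, budget):
    """Allocate `budget` explorations across communities greedily.

    Under the uniform-sampling assumption, exploring a community of size d
    k times meets d * (1 - (1 - 1/d)**k) distinct members in expectation,
    so the marginal gain of one more exploration is (1 - 1/d)**k.
    Greedy repeatedly spends one unit of budget on the community whose next
    exploration has the largest marginal gain.
    """
    allocation = [0] * len(sizes)
    # Max-heap (via negated keys) of the marginal gain of the next exploration.
    heap = [(-1.0, i) for i in range(len(sizes))]  # first visit always gains 1
    heapq.heapify(heap)
    for _ in range(budget):
        _, i = heapq.heappop(heap)
        allocation[i] += 1
        next_gain = (1.0 - 1.0 / sizes[i]) ** allocation[i]
        heapq.heappush(heap, (-next_gain, i))
    return allocation


if __name__ == "__main__":
    # Hypothetical example: three communities of sizes 10, 5, 2 and budget 8.
    print(greedy_allocation([10, 5, 2], budget=8))
```

In the online setting described in the abstract, the true sizes would be replaced by optimistic ("upper confidence") estimates updated from the members observed in each round; the allocation step itself could remain greedy as above.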
