Matrix factorization (MF) is an effective and widely used collaborative filtering method in recommender systems. However, the problem of finding an optimal trade-off between exploration and exploitation (otherwise known as the bandit problem), which is crucial for collaborative filtering from a cold start, has not been previously addressed. In this paper, we present a novel algorithm for online MF recommendation that automatically combines finding the most relevant items with exploring new or less-recommended items. Our approach, called Particle Thompson Sampling for Matrix Factorization, is based on the general Thompson sampling framework, augmented with a novel, efficient online Bayesian probabilistic matrix factorization method based on the Rao-Blackwellized particle filter. Extensive collaborative filtering experiments on several real-world datasets demonstrate that the proposed algorithm significantly outperforms the current state of the art.
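To make the idea concrete, here is a minimal, hypothetical sketch of Thompson sampling for MF-based recommendation using a plain importance-weighted particle filter. This is a simplification, not the paper's algorithm: it omits Rao-Blackwellization, resampling, and particle rejuvenation, and the toy data setup (`n_users`, `n_items`, rank `k`, noise `sigma`) is invented for illustration. Each particle is a full hypothesis over the latent user and item factors; at each step we draw one particle in proportion to its weight (the Thompson sampling step), act greedily under it, and reweight all particles by the likelihood of the observed rating.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a low-rank "true" rating matrix (not from the paper).
n_users, n_items, k = 5, 8, 2
U_true = rng.normal(size=(n_users, k))
V_true = rng.normal(size=(n_items, k))
R_true = U_true @ V_true.T

n_particles = 30
sigma = 0.5  # assumed observation-noise standard deviation
# Each particle is one full (U, V) hypothesis about the latent factors.
particles = [(rng.normal(size=(n_users, k)), rng.normal(size=(n_items, k)))
             for _ in range(n_particles)]
weights = np.full(n_particles, 1.0 / n_particles)

def recommend(user):
    # Thompson sampling step: draw one particle ~ weights, act greedily under it.
    idx = rng.choice(n_particles, p=weights)
    U, V = particles[idx]
    return int(np.argmax(U[user] @ V.T))

def update(user, item, rating):
    # Reweight particles by the Gaussian likelihood of the observed rating,
    # working in log space for numerical stability.
    global weights
    loglik = np.array([-0.5 * ((rating - U[user] @ V[item]) / sigma) ** 2
                       for U, V in particles])
    weights = weights * np.exp(loglik - loglik.max())
    weights /= weights.sum()

# Simulated interaction loop: recommend, observe a noisy rating, update.
regret = 0.0
for t in range(200):
    user = int(rng.integers(n_users))
    item = recommend(user)
    rating = R_true[user, item] + rng.normal(scale=sigma)
    update(user, item, rating)
    regret += R_true[user].max() - R_true[user, item]
```

Because exploration here comes from posterior sampling rather than an explicit bonus, items the model is uncertain about are tried exactly as often as the (approximate) posterior deems plausible, which is what makes the approach attractive for the cold-start setting described above.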
Author Information
Jaya Kawale (Adobe Research)
Hung H Bui (Adobe Research)
Branislav Kveton (Adobe Research)
Long Tran-Thanh (University of Southampton)
Sanjay Chawla (Qatar Computing Research Institute, HBKU and University of Sydney)
More from the Same Authors
- 2019 Poster: Manipulating a Learning Defender and Ways to Counteract »
  Jiarui Gan · Qingyu Guo · Long Tran-Thanh · Bo An · Michael Wooldridge
- 2017 Poster: Online Influence Maximization under Independent Cascade Model with Semi-Bandit Feedback »
  Zheng Wen · Branislav Kveton · Michal Valko · Sharan Vaswani
- 2015 Poster: Combinatorial Cascading Bandits »
  Branislav Kveton · Zheng Wen · Azin Ashkan · Csaba Szepesvari