Recommendation techniques are important approaches for alleviating information overload. Because they are often trained on implicit user feedback, many recommenders suffer from data sparsity due to the lack of explicit negative samples. GAN-style recommenders (e.g., IRGAN) address this challenge by adversarially learning a generator and a discriminator, such that the generator produces increasingly difficult samples for the discriminator, accelerating optimization of the discrimination objective. However, producing samples from the generator is very time-consuming, and our empirical study shows that the discriminator performs poorly in top-k item recommendation. To this end, we theoretically analyze GAN-style algorithms and show that a generator of limited capacity diverges from the optimal generator, which may explain the discriminator's limited performance. Based on these findings, we propose a Sampling-Decomposable Generative Adversarial Recommender (SD-GAR). In this framework, the divergence between the generator and the optimum is compensated by self-normalized importance sampling, and the efficiency of sample generation is improved with a sampling-decomposable generator, such that each sample can be generated in O(1) with the Vose-Alias method. Interestingly, due to the decomposability of sampling, the generator can be optimized with closed-form solutions in an alternating manner, in contrast to the policy gradient used in GAN-style algorithms. We extensively evaluate the proposed algorithm on five real-world recommendation datasets. The results show that SD-GAR outperforms IRGAN by 12.4% and the SOTA recommender by 10% on average. Moreover, discriminator training can be 20x faster on the dataset with more than 120K items.
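As background for the O(1) sampling claim, the sketch below illustrates the Vose-Alias method the abstract refers to: the alias table is built once in O(n) from a discrete distribution over items, after which each draw costs O(1) (one uniform index plus one biased coin flip). This is a minimal illustration of the general technique, not code from the paper; the function names `build_alias_table` and `alias_sample` are my own.

```python
import random

def build_alias_table(probs):
    """Build Vose's alias table from a discrete distribution in O(n)."""
    n = len(probs)
    scaled = [p * n for p in probs]          # rescale so the average weight is 1
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    prob, alias = [0.0] * n, [0] * n
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s] = scaled[s]                  # column s keeps mass prob[s] ...
        alias[s] = l                         # ... and donates the rest to item l
        scaled[l] += scaled[s] - 1.0
        (small if scaled[l] < 1.0 else large).append(l)
    for i in large + small:                  # leftovers (floating-point rounding)
        prob[i] = 1.0
    return prob, alias

def alias_sample(prob, alias, rng=random):
    """Draw one item in O(1): pick a column, then flip its biased coin."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]
```

Because table construction is amortized over many draws, a sampling-decomposable generator can refresh its tables after each closed-form update and still serve negative samples in constant time per draw.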
Author Information
Binbin Jin (University of Science and Technology of China)
Defu Lian (University of Science and Technology of China)
Zheng Liu (Microsoft)
Qi Liu (University of Science and Technology of China)
Jianhui Ma (University of Science and Technology of China)
Xing Xie (Microsoft Research Asia)
Enhong Chen (University of Science and Technology of China)
More from the Same Authors
- 2021 Poster: GraphFormers: GNN-nested Transformers for Representation Learning on Textual Graph »
  Junhan Yang · Zheng Liu · Shitao Xiao · Chaozhuo Li · Defu Lian · Sanjay Agrawal · Amit Singh · Guangzhong Sun · Xing Xie
- 2021 Poster: Meta-learning with an Adaptive Task Scheduler »
  Huaxiu Yao · Yu Wang · Ying Wei · Peilin Zhao · Mehrdad Mahdavi · Defu Lian · Chelsea Finn
- 2021 Poster: Motif-based Graph Self-Supervised Learning for Molecular Property Prediction »
  ZAIXI ZHANG · Qi Liu · Hao Wang · Chengqiang Lu · Chee-Kong Lee
- 2020 Poster: Semi-Supervised Neural Architecture Search »
  Renqian Luo · Xu Tan · Rui Wang · Tao Qin · Enhong Chen · Tie-Yan Liu
- 2020 Poster: Incorporating BERT into Parallel Sequence Decoding with Adapters »
  Junliang Guo · Zhirui Zhang · Linli Xu · Hao-Ran Wei · Boxing Chen · Enhong Chen
- 2019 Poster: Efficient Pure Exploration in Adaptive Round Model »
  Tianyuan Jin · Jieming SHI · Xiaokui Xiao · Enhong Chen
- 2018 Poster: Neural Architecture Optimization »
  Renqian Luo · Fei Tian · Tao Qin · Enhong Chen · Tie-Yan Liu
- 2012 Poster: Image Denoising and Inpainting with Deep Neural Networks »
  Junyuan Xie · Linli Xu · Enhong Chen