Recommender retrievers aim to rapidly retrieve a small fraction of items from the entire item corpus when a user query arrives, with the representative two-tower model trained with the log-softmax loss. To train recommender retrievers efficiently on modern hardware, in-batch sampling, where the items within a mini-batch are shared as negatives to estimate the softmax function, has attracted growing interest. However, existing in-batch sampling strategies merely correct the sampling bias of in-batch items with item frequency; they cannot distinguish between the user queries within the mini-batch and still incur significant bias relative to the full softmax. In this paper, we propose Cache-Augmented Inbatch Importance Resampling (XIR) for training recommender retrievers, which not only provides each user query with different negatives drawn from the in-batch items, but also adaptively achieves a more accurate estimation of the softmax distribution. Specifically, XIR resamples items from the given mini-batch of training pairs according to importance probabilities, and a cache of the more frequently sampled items is maintained to augment the candidate item set, so that historically informative samples can be reused. XIR thus samples query-dependent negatives from the in-batch items and captures the dynamic changes of the model during training, which leads to a better approximation of the softmax and in turn to better convergence. Finally, we conduct experiments that validate the superior performance of XIR over competitive approaches.
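To make the resampling idea concrete, below is a minimal PyTorch sketch of cache-augmented, query-dependent negative resampling as described in the abstract. It is an illustration under assumptions, not the authors' implementation: the function name `xir_negatives`, the cache tensor `cache_emb`, and the parameter `num_neg` are hypothetical, the resampling distribution is taken to be the softmax over the current model's scores, and the cache-update policy is omitted.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()  # sampling negatives should not backpropagate
def xir_negatives(query_emb, item_emb, cache_emb, num_neg):
    """Resample query-dependent negatives from in-batch items plus a cache.

    query_emb: (B, d) query-tower embeddings for the mini-batch
    item_emb:  (B, d) item-tower embeddings of the batch's positive items
    cache_emb: (C, d) embeddings of frequently resampled (cached) items
    Returns (neg_idx, candidates): per-query indices into the augmented pool.
    """
    B = query_emb.size(0)
    # Augment the in-batch candidate pool with the cached items.
    candidates = torch.cat([item_emb, cache_emb], dim=0)           # (B + C, d)
    logits = query_emb @ candidates.T                              # (B, B + C)
    # Mask each query's own positive so it is never drawn as a negative.
    logits[torch.arange(B), torch.arange(B)] = float("-inf")
    # Resample in proportion to exp(score): each query draws its own
    # negatives, and the draw tracks the current state of the model.
    probs = F.softmax(logits, dim=1)
    neg_idx = torch.multinomial(probs, num_neg, replacement=True)  # (B, num_neg)
    # In the full method, frequently drawn items would be written back
    # into the cache so informative samples are reused across batches.
    return neg_idx, candidates

# Example usage with random embeddings (batch 128, cache 512, dim 64):
q, i, cache = torch.randn(128, 64), torch.randn(128, 64), torch.randn(512, 64)
neg_idx, pool = xir_negatives(q, i, cache, num_neg=16)
```

Because each query row of `probs` is different, two queries in the same batch generally receive different negatives, which is the key distinction from plain in-batch sharing.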
Author Information
Jin Chen (University of Electronic Science and Technology of China)
Defu Lian (University of Science and Technology of China)
Yucheng Li
Baoyun Wang
Kai Zheng (University of Electronic Science and Technology of China)
Enhong Chen (University of Science and Technology of China)
More from the Same Authors
- 2022 Poster: DARE: Disentanglement-Augmented Rationale Extraction »
  Linan Yue · Qi Liu · Yichao Du · Yanqing An · Li Wang · Enhong Chen
- 2022 Spotlight: Lightning Talks 5B-4 »
  Yuezhi Yang · Zeyu Yang · Yong Lin · Yishi Xu · Linan Yue · Tao Yang · Weixin Chen · Qi Liu · Jiaqi Chen · Dongsheng Wang · Baoyuan Wu · Yuwang Wang · Hao Pan · Shengyu Zhu · Zhenwei Miao · Yan Lu · Lu Tan · Bo Chen · Yichao Du · Haoqian Wang · Wei Li · Yanqing An · Ruiying Lu · Peng Cui · Nanning Zheng · Li Wang · Zhibin Duan · Xiatian Zhu · Mingyuan Zhou · Enhong Chen · Li Zhang
- 2022 Spotlight: DARE: Disentanglement-Augmented Rationale Extraction »
  Linan Yue · Qi Liu · Yichao Du · Yanqing An · Li Wang · Enhong Chen
- 2022 Poster: Graph Convolution Network based Recommender Systems: Learning Guarantee and Item Mixture Powered Strategy »
  Leyan Deng · Defu Lian · Chenwang Wu · Enhong Chen
- 2022 Poster: Recommender Forest for Efficient Retrieval »
  Chao Feng · Wuchao Li · Defu Lian · Zheng Liu · Enhong Chen
- 2021 Poster: GraphFormers: GNN-nested Transformers for Representation Learning on Textual Graph »
  Junhan Yang · Zheng Liu · Shitao Xiao · Chaozhuo Li · Defu Lian · Sanjay Agrawal · Amit Singh · Guangzhong Sun · Xing Xie
- 2021 Poster: Meta-learning with an Adaptive Task Scheduler »
  Huaxiu Yao · Yu Wang · Ying Wei · Peilin Zhao · Mehrdad Mahdavi · Defu Lian · Chelsea Finn
- 2020 Poster: Semi-Supervised Neural Architecture Search »
  Renqian Luo · Xu Tan · Rui Wang · Tao Qin · Enhong Chen · Tie-Yan Liu
- 2020 Poster: Incorporating BERT into Parallel Sequence Decoding with Adapters »
  Junliang Guo · Zhirui Zhang · Linli Xu · Hao-Ran Wei · Boxing Chen · Enhong Chen
- 2020 Poster: Sampling-Decomposable Generative Adversarial Recommender »
  Binbin Jin · Defu Lian · Zheng Liu · Qi Liu · Jianhui Ma · Xing Xie · Enhong Chen
- 2019 Poster: Efficient Pure Exploration in Adaptive Round Model »
  Tianyuan Jin · Jieming SHI · Xiaokui Xiao · Enhong Chen
- 2018 Poster: Neural Architecture Optimization »
  Renqian Luo · Fei Tian · Tao Qin · Enhong Chen · Tie-Yan Liu
- 2012 Poster: Image Denoising and Inpainting with Deep Neural Networks »
  Junyuan Xie · Linli Xu · Enhong Chen