Latent factor models are canonical tools for learning low-dimensional linear embeddings of data. Traditional latent factor models are based on low-rank factorization of covariance matrices. However, for higher-order data with multiple modes, i.e., tensors, this treatment fails to account for mode-specific relations, leading to inefficient analysis of complex structures and poor data compression. In this paper, instead of covariance matrices, we investigate the high-order covariance tensor directly by exploiting the tensor ring (TR) format and propose the Bayesian TR latent factor model, which represents complex multi-linear correlations and achieves efficient data compression. To overcome the difficulty of finding the optimal TR-ranks while simultaneously imposing sparsity on the loading coefficients, we adopt a multiplicative gamma process (MGP) prior to automatically infer the ranks and induce sparsity. We then develop an efficient parameter-expanded EM algorithm to learn the maximum a posteriori (MAP) estimate of the model parameters.
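To make the TR format concrete, here is a minimal NumPy sketch (not the paper's implementation) of how a tensor-ring representation reconstructs a full tensor: each mode k gets a core of shape (r_k, n_k, r_{k+1}), the ranks close into a ring (r_{N+1} = r_1), and entry (i_1, ..., i_N) is the trace of the product of the corresponding core slices. The ranks, mode sizes, and function name below are illustrative choices, not values from the paper.

```python
import numpy as np

def tr_reconstruct(cores):
    """Reconstruct a full tensor from tensor-ring (TR) cores.

    Each core G_k has shape (r_k, n_k, r_{k+1}), with r_{N+1} = r_1 so the
    chain of ranks closes into a ring. Entry (i_1, ..., i_N) of the tensor
    is Tr(G_1[:, i_1, :] @ G_2[:, i_2, :] @ ... @ G_N[:, i_N, :]).
    """
    result = cores[0]                      # shape (r_1, n_1, r_2)
    for G in cores[1:]:
        # Contract the trailing rank index with the next core's leading rank.
        result = np.einsum('a...b,bjc->a...jc', result, G)
    # Close the ring: trace over the first and last rank indices.
    return np.einsum('a...a->...', result)

# Small example with illustrative TR-ranks (2, 3, 4) and mode sizes (5, 6, 7).
rng = np.random.default_rng(0)
ranks = [2, 3, 4]
modes = [5, 6, 7]
cores = [rng.standard_normal((ranks[k], modes[k], ranks[(k + 1) % 3]))
         for k in range(3)]
X = tr_reconstruct(cores)
print(X.shape)  # (5, 6, 7)

# Check one entry against the trace formula directly.
entry = np.trace(cores[0][:, 1, :] @ cores[1][:, 2, :] @ cores[2][:, 3, :])
assert np.isclose(X[1, 2, 3], entry)
```

The appeal of the ring structure is visible here: storage grows linearly in the number of modes (sum of r_k * n_k * r_{k+1} core entries) rather than exponentially as for the full tensor, which is what enables the compression the abstract refers to. The MGP prior in the paper governs how the ranks r_k are inferred rather than fixed by hand as in this sketch.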
Author Information
Zerui Tao (Tokyo University of Agriculture and Technology)
Xuyang ZHAO (Tokyo University of Agriculture and Technology)
Toshihisa Tanaka (Tokyo University of Agriculture and Technology)
Qibin Zhao (RIKEN AIP)
More from the Same Authors
- 2021: Bayesian Tensor Networks (Kriton Konstantinidis · Yao Lei Xu · Qibin Zhao · Danilo Mandic)
- 2021: Is Rank Minimization of the Essence to Learn Tensor Network Structure? (Chao Li · Qibin Zhao)
- 2021: Fully-Connected Tensor Network Decomposition (Yu-Bang Zheng · Ting-Zhu Huang · Xi-Le Zhao · Qibin Zhao · Tai-Xiang Jiang)
- 2023 Poster: Transformed Low-Rank Parameterization Can Help Robust Generalization for Tensor Neural Networks (Andong Wang · Chao Li · Mingyuan Bai · Zhong Jin · Guoxu Zhou · Qibin Zhao)
- 2023 Poster: Undirected Probabilistic Model for Tensor Decomposition (Zerui Tao · Toshihisa Tanaka · Qibin Zhao)
- 2022 Poster: SPD domain-specific batch normalization to crack interpretable unsupervised domain adaptation in EEG (Reinmar Kobler · Jun-ichiro Hirayama · Qibin Zhao · Motoaki Kawanabe)
- 2021: Discussion Panel (Xiao-Yang Liu · Qibin Zhao · Chao Li · Guillaume Rabusseau)
- 2021 Workshop: Second Workshop on Quantum Tensor Networks in Machine Learning (Xiao-Yang Liu · Qibin Zhao · Ivan Oseledets · Yufei Ding · Guillaume Rabusseau · Jean Kossaifi · Khadijeh Najafi · Anwar Walid · Andrzej Cichocki · Masashi Sugiyama)
- 2020 Workshop: First Workshop on Quantum Tensor Networks in Machine Learning (Xiao-Yang Liu · Qibin Zhao · Jacob Biamonte · Cesar F Caiafa · Paul Pu Liang · Nadav Cohen · Stefan Leichenauer)
- 2019 Poster: Deep Multimodal Multilinear Fusion with High-order Polynomial Pooling (Ming Hou · Jiajia Tang · Jianhai Zhang · Wanzeng Kong · Qibin Zhao)
- 2011 Poster: A Multilinear Subspace Regression Method Using Orthogonal Tensors Decompositions (Qibin Zhao · Cesar F Caiafa · Danilo Mandic · Liqing Zhang · Tonio Ball · Andreas Schulze-bonhage · Andrzej S Cichocki)
- 2011 Spotlight: A Multilinear Subspace Regression Method Using Orthogonal Tensors Decompositions (Qibin Zhao · Cesar F Caiafa · Danilo Mandic · Liqing Zhang · Tonio Ball · Andreas Schulze-bonhage · Andrzej S Cichocki)