We propose a method that improves anomaly detection performance on target domains by transferring knowledge from related domains. Although anomaly labels are valuable for learning anomaly detectors, they are difficult to obtain due to their rarity. To alleviate this problem, existing methods use anomalous and normal instances in related domains as well as normal instances in the target domain. However, these methods require training on each target domain, which can be problematic in some situations due to the high computational cost of training. The proposed method can infer anomaly detectors for target domains without re-training by introducing latent domain vectors: latent representations of the domains that are used to infer the anomaly detectors. The latent domain vector for each domain is inferred from the set of normal instances in that domain. The anomaly score function for each domain is modeled on the basis of autoencoders, and its domain-specific properties are controlled by the latent domain vector. The anomaly score function is trained so that the scores of normal instances become low and the scores of anomalies become higher than those of normal instances, while taking into account the uncertainty of the latent domain vectors. When target normal instances are available during training, the proposed method can also use them within the same unified framework. The effectiveness of the proposed method is demonstrated through experiments on one synthetic and four real-world datasets. Notably, the proposed method without re-training outperforms existing methods with target-specific training.
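The two core pieces described above — a permutation-invariant encoder that maps a domain's set of normal instances to a latent domain vector, and an autoencoder-based anomaly score function conditioned on that vector — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the mean-pooling set encoder, the single-hidden-layer networks, the dimensions, and the hinge-style objective are all assumed forms, and randomly initialised weights stand in for trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper):
# D = instance dimension, H = hidden units, Z = latent domain vector size.
D, H, Z = 5, 8, 3

# Random weights stand in for parameters learned on the related domains.
W_set = rng.normal(scale=0.1, size=(D, Z))      # per-instance embedding for the set encoder
W_enc = rng.normal(scale=0.1, size=(D + Z, H))  # encoder conditioned on the domain vector
W_dec = rng.normal(scale=0.1, size=(H, D))      # decoder back to instance space

def domain_vector(X_normal):
    """Infer a latent domain vector from a set of normal instances by
    mean-pooling per-instance embeddings (permutation invariant, so the
    result does not depend on the order of the instances)."""
    return np.tanh(X_normal @ W_set).mean(axis=0)

def anomaly_score(x, z):
    """Anomaly score: reconstruction error of an autoencoder whose
    encoder input is the instance concatenated with the domain vector."""
    h = np.tanh(np.concatenate([x, z]) @ W_enc)
    x_hat = h @ W_dec
    return float(np.sum((x - x_hat) ** 2))

def pair_loss(score_normal, score_anomaly, margin=1.0):
    """Hinge-style training objective (assumed form): push normal scores
    down while keeping anomaly scores at least `margin` above them."""
    return score_normal + max(0.0, margin + score_normal - score_anomaly)

# At test time a new target domain needs only its normal instances:
# infer the domain vector once, then score any instance without re-training.
X_target_normal = rng.normal(size=(20, D))
z = domain_vector(X_target_normal)
scores = [anomaly_score(x, z) for x in X_target_normal]
```

Because `domain_vector` is a feed-forward computation over the normal set, adapting to a new domain is a single forward pass rather than an optimisation loop, which is what lets the method skip per-target training.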
Author Information
Atsutoshi Kumagai (NTT)
Tomoharu Iwata (NTT)
Yasuhiro Fujiwara (NTT Communication Science Laboratories)
More from the Same Authors
- 2023 Poster: Regularizing Neural Networks with Meta-Learning Generative Models »
  Shin'ya Yamaguchi · Daiki Chijiwa · Sekitoshi Kanai · Atsutoshi Kumagai · Hisashi Kashima
- 2022 Poster: Symplectic Spectrum Gaussian Processes: Learning Hamiltonians from Noisy and Sparse Data »
  Yusuke Tanaka · Tomoharu Iwata · naonori ueda
- 2022 Poster: Few-shot Learning for Feature Selection with Hilbert-Schmidt Independence Criterion »
  Atsutoshi Kumagai · Tomoharu Iwata · Yasutoshi Ida · Yasuhiro Fujiwara
- 2022 Poster: Meta-ticket: Finding optimal subnetworks for few-shot learning within randomly initialized neural networks »
  Daiki Chijiwa · Shin'ya Yamaguchi · Atsutoshi Kumagai · Yasutoshi Ida
- 2022 Poster: Sharing Knowledge for Meta-learning with Feature Descriptions »
  Tomoharu Iwata · Atsutoshi Kumagai
- 2021 Poster: Meta-Learning for Relative Density-Ratio Estimation »
  Atsutoshi Kumagai · Tomoharu Iwata · Yasuhiro Fujiwara
- 2021 Poster: Loss function based second-order Jensen inequality and its application to particle variational inference »
  Futoshi Futami · Tomoharu Iwata · naonori ueda · Issei Sato · Masashi Sugiyama
- 2019 Poster: Fast Sparse Group Lasso »
  Yasutoshi Ida · Yasuhiro Fujiwara · Hisashi Kashima
- 2019 Poster: Spatially Aggregated Gaussian Processes with Multivariate Areal Outputs »
  Yusuke Tanaka · Toshiyuki Tanaka · Tomoharu Iwata · Takeshi Kurashima · Maya Okawa · Yasunori Akagi · Hiroyuki Toda
- 2016 Poster: Multi-view Anomaly Detection via Robust Probabilistic Latent Variable Models »
  Tomoharu Iwata · Makoto Yamada
- 2015 Poster: Cross-Domain Matching for Bag-of-Words Data via Kernel Embeddings of Latent Distributions »
  Yuya Yoshikawa · Tomoharu Iwata · Hiroshi Sawada · Takeshi Yamada
- 2014 Poster: Latent Support Measure Machines for Bag-of-Words Data Classification »
  Yuya Yoshikawa · Tomoharu Iwata · Hiroshi Sawada