The ability to detect anomalies has long been recognized as an inherent human capability, yet to date, practical AI solutions that mimic it have been lacking. This lack of progress can be attributed to several factors. To begin with, the distribution of "abnormalities" is intractable: anything outside a given normal population is by definition an anomaly. This explains why a large volume of work in this area has been dedicated to modeling the normal distribution of a given task and then detecting deviations from it. This direction is, however, unsatisfying, as it requires modeling the normal distribution of every task that comes along, which entails tedious data collection. In this paper, we report our work aimed at addressing these issues. To deal with the intractability of the abnormal distribution, we leverage an Energy-Based Model (EBM). An EBM learns to associate low energies with correct values and higher energies with incorrect values. At its core, the EBM employs Langevin Dynamics (LD) to generate these incorrect samples through an iterative optimization procedure, sidestepping the intractable problem of modeling the world of anomalies. Then, to avoid training an anomaly detector for every task, we utilize an adaptive sparse coding layer. Our intention is to design a plug-and-play feature that can quickly update what is considered normal at inference time. Lastly, to avoid tedious data collection, this update of the sparse coding layer needs to be achievable with just a few shots. Here, we employ a meta-learning scheme that simulates such a few-shot setting during training. We support our findings with strong empirical evidence.
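To make the Langevin Dynamics step described above concrete, the following is a minimal sketch (not the authors' released code) of how LD can iteratively produce low-energy "incorrect" samples from an energy network; the function name langevin_negative_samples, the network energy_net, and all hyperparameter values are illustrative assumptions.

    # Minimal sketch of Langevin Dynamics negative sampling for an EBM.
    # energy_net, step_size, noise_std, and n_steps are assumed placeholders.
    import torch

    def langevin_negative_samples(energy_net, x_init, n_steps=60,
                                  step_size=10.0, noise_std=0.005):
        # Refine x_init toward low energy so the resulting samples can serve
        # as "incorrect" (negative) examples for contrastive EBM training.
        x = x_init.clone().detach().requires_grad_(True)
        for _ in range(n_steps):
            energy = energy_net(x).sum()            # total energy of the batch
            grad, = torch.autograd.grad(energy, x)  # dE/dx
            with torch.no_grad():
                x -= step_size * grad                 # step toward lower energy
                x += noise_std * torch.randn_like(x)  # Langevin noise term
        return x.detach()

During training, such generated negatives would be contrasted against the available normal samples so that low energy is assigned to normal data; step size and noise scale typically need tuning per task.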
Author Information
Ze Wang (Purdue University)
Yipin Zhou (Facebook)
Rui Wang (Facebook)
Tsung-Yu Lin (Department of Computer Science, University of Massachusetts, Amherst)
Ashish Shah (Booz Allen Hamilton)
Ser Nam Lim (Facebook AI)
More from the Same Authors
- 2021 : Mix-MaxEnt: Improving Accuracy and Uncertainty Estimates of Deterministic Neural Networks »
  Francesco Pinto · Harry Yang · Ser Nam Lim · Philip Torr · Puneet Dokania
- 2022 Poster: Using Mixup as a Regularizer Can Surprisingly Improve Accuracy & Out-of-Distribution Robustness »
  Francesco Pinto · Harry Yang · Ser Nam Lim · Philip Torr · Puneet Dokania
- 2022 Poster: Spartan: Differentiable Sparsity via Regularized Transportation »
  Kai Sheng Tai · Taipeng Tian · Ser Nam Lim
- 2022 Poster: FedSR: A Simple and Effective Domain Generalization Method for Federated Learning »
  A. Tuan Nguyen · Philip Torr · Ser Nam Lim
- 2022 Poster: GAPX: Generalized Autoregressive Paraphrase-Identification X »
  Yifei Zhou · Renyu Li · Hayden Housen · Ser Nam Lim
- 2022 Poster: HorNet: Efficient High-Order Spatial Interactions with Recursive Gated Convolutions »
  Yongming Rao · Wenliang Zhao · Yansong Tang · Jie Zhou · Ser Nam Lim · Jiwen Lu
- 2021 Poster: Learning to Ground Multi-Agent Communication with Autoencoders »
  Toru Lin · Jacob Huh · Christopher Stauffer · Ser Nam Lim · Phillip Isola
- 2021 Poster: Large Scale Learning on Non-Homophilous Graphs: New Benchmarks and Strong Simple Methods »
  Derek Lim · Felix Hohne · Xiuyu Li · Sijia Linda Huang · Vaishnavi Gupta · Omkar Bhalerao · Ser Nam Lim
- 2021 Poster: NeRV: Neural Representations for Videos »
  Hao Chen · Bo He · Hanyu Wang · Yixuan Ren · Ser Nam Lim · Abhinav Shrivastava
- 2021 Poster: Equivariant Manifold Flows »
  Isay Katsman · Aaron Lou · Derek Lim · Qingxuan Jiang · Ser Nam Lim · Christopher De Sa
- 2021 Poster: A Continuous Mapping For Augmentation Design »
  Keyu Tian · Chen Lin · Ser Nam Lim · Wanli Ouyang · Puneet Dokania · Philip Torr
- 2020 Poster: Better Set Representations For Relational Reasoning »
  Qian Huang · Horace He · Abhay Singh · Yan Zhang · Ser Nam Lim · Austin Benson
- 2020 Poster: Neural Manifold Ordinary Differential Equations »
  Aaron Lou · Derek Lim · Isay Katsman · Leo Huang · Qingxuan Jiang · Ser Nam Lim · Christopher De Sa