

Poster

Statistical Analysis of Nearest Neighbor Methods for Anomaly Detection

Xiaoyi Gu · Leman Akoglu · Alessandro Rinaldo

East Exhibition Hall B, C #237

Keywords: [ Learning Theory ] [ Theory ] [ Algorithms ] [ Unsupervised Learning ]


Abstract:

Nearest-neighbor (NN) procedures are well studied and widely used in both supervised and unsupervised learning problems. In this paper we investigate the performance of NN-based methods for anomaly detection. We first show through extensive simulations that NN methods compare favorably with other state-of-the-art anomaly detection algorithms on a set of benchmark synthetic datasets. We further consider the performance of NN methods on real datasets and relate it to the dimensionality of the problem. Next, we analyze the theoretical properties of NN methods for anomaly detection by studying a more general quantity called the distance-to-measure (DTM), originally developed in the literature on robust geometric and topological inference. We provide finite-sample uniform guarantees for the empirical DTM and use them to derive misclassification rates for anomalous observations under various settings. In our analysis we rely on Huber's contamination model and formulate mild geometric regularity assumptions on the underlying distribution of the data.
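The abstract does not spell out the estimator, but a common form of the empirical DTM scores a point by averaging its (squared) distances to the k = ⌈mn⌉ nearest sample points and taking a root. The minimal Python sketch below illustrates that idea as an anomaly score; the mass parameter m, the power r = 2, the quantile threshold, and the toy data are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def empirical_dtm(reference, query, m=0.05):
    """Empirical distance-to-measure (DTM) score with power r = 2.

    For each query point, averages the squared distances to its
    k = ceil(m * n) nearest neighbors in the reference sample and
    returns the square root. Larger scores indicate points lying far
    from regions of high empirical mass, i.e. anomaly candidates.
    """
    n = len(reference)
    k = max(1, int(np.ceil(m * n)))
    nn = NearestNeighbors(n_neighbors=k).fit(reference)
    dists, _ = nn.kneighbors(query)            # shape (n_query, k)
    return np.sqrt(np.mean(dists ** 2, axis=1))

# Toy usage: Gaussian inliers plus a few planted anomalies.
rng = np.random.default_rng(0)
inliers = rng.normal(size=(500, 2))
anomalies = rng.normal(loc=6.0, size=(5, 2))
data = np.vstack([inliers, anomalies])

scores = empirical_dtm(data, data, m=0.05)
# Flag the top 1% of DTM scores as anomalies (the cutoff is a heuristic).
threshold = np.quantile(scores, 0.99)
print(np.where(scores > threshold)[0])
```

With m chosen so that k = 1, the score reduces to the ordinary nearest-neighbor distance, which is why the DTM can be viewed as a smoothed generalization of kNN-distance anomaly scores.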
