

Poster

AnoShift: A Distribution Shift Benchmark for Unsupervised Anomaly Detection

Marius Dragoi · Elena Burceanu · Emanuela Haller · Andrei Manolache · Florin Brad

Hall J (level 1) #1028

Keywords: [ network traffic ] [ network intrusion detection ] [ anomaly ] [ non-stationary ] [ unsupervised ] [ Distribution Shift ] [ natural distribution shifts ]


Abstract:

Analyzing the distribution shift of data is a growing research direction in today's Machine Learning (ML), leading to new benchmarks that provide suitable scenarios for studying the generalization properties of ML models. Existing benchmarks focus on supervised learning and, to the best of our knowledge, none exists for unsupervised learning. We therefore introduce an unsupervised anomaly detection benchmark with data that shifts over time, built on Kyoto-2006+, a traffic dataset for network intrusion detection. This type of data meets the premise of a shifting input distribution: it covers a large time span (10 years), with naturally occurring changes over time (e.g. users modifying their behavior patterns, and software updates). We first highlight the non-stationary nature of the data, using a basic per-feature analysis, t-SNE, and an Optimal Transport approach for measuring the overall distribution distances between years. Next, we propose AnoShift, a protocol splitting the data into IID, NEAR, and FAR testing splits. We validate the performance degradation over time with diverse models, ranging from classical approaches to deep learning. Finally, we show that by acknowledging the distribution shift problem and properly addressing it, performance can be improved compared to classical training, which assumes independent and identically distributed data (on average, by up to 3% for our approach). The dataset and code are available at https://github.com/bit-ml/AnoShift/.
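To illustrate the kind of temporal split the AnoShift protocol describes, below is a minimal sketch in Python. It assumes the Kyoto-2006+ records are loaded into a pandas DataFrame with a "year" column; the column name and the exact year boundaries are assumptions for illustration only and should be checked against the official splits in the AnoShift repository.

```python
# Minimal sketch of an AnoShift-style temporal split (illustrative only).
# Assumption: records carry a "year" column; the boundaries below are
# hypothetical and must be verified against the official AnoShift splits.
import pandas as pd

TRAIN_YEARS = range(2006, 2011)  # years also used for the IID test split (assumed)
NEAR_YEARS = range(2011, 2014)   # moderate shift from the training period (assumed)
FAR_YEARS = range(2014, 2016)    # strongest shift from the training period (assumed)


def anoshift_splits(df: pd.DataFrame) -> dict:
    """Split traffic records by year into IID / NEAR / FAR test sets."""
    return {
        "iid": df[df["year"].isin(TRAIN_YEARS)],
        "near": df[df["year"].isin(NEAR_YEARS)],
        "far": df[df["year"].isin(FAR_YEARS)],
    }
```

In this setup, an unsupervised detector would be trained on the training-period years only, and anomaly-detection scores would then be reported separately on each split to expose the degradation caused by the natural distribution shift over time.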
