AutoMS: Automatic Model Selection for Novelty Detection with Error Rate Control

Yifan Zhang · Haiyan Jiang · Haojie Ren · Changliang Zou · Dejing Dou

Hall J #235

Keywords: [ Novelty Detection ] [ FDR ] [ model selection ]

Abstract
Tue 29 Nov 2 p.m. PST — 4 p.m. PST


Given an unsupervised novelty detection task on a new dataset, how can we automatically select a "best" detection model while simultaneously controlling its error rate? In novelty detection, numerous detectors have been proposed to flag outliers in a new, unseen dataset using a score function trained on available clean data. However, because no labeled data are available for model evaluation and comparison, there is a lack of systematic approaches that can select a "best" model/detector (i.e., the algorithm together with its hyperparameters) while achieving guaranteed error rate control. In this paper, we introduce a unified data-driven procedure to address this issue. The key idea is to maximize the number of detected outliers while controlling the false discovery rate (FDR) with the help of jackknife prediction. We establish non-asymptotic bounds on the false discovery proportion and show that the proposed procedure yields valid FDR control under mild conditions. Numerical experiments on both synthetic and real data validate the theoretical results and demonstrate the effectiveness of the proposed AutoMS method. The code is available at
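The procedure sketched in the abstract — jackknife (leave-one-out) calibration on clean data, conformal p-values for test points, Benjamini–Hochberg-style FDR control, and selection of the candidate detector that maximizes discoveries — can be illustrated with a minimal sketch. This is not the authors' implementation; the function names (`jackknife_scores`, `bh_detect`, `auto_ms`) and the detector interface (a `fit_score` callable that returns a scoring function) are illustrative assumptions.

```python
import numpy as np

def jackknife_scores(fit_score, X_clean):
    """Leave-one-out scores on clean data: each point is scored
    by a detector fitted without it (illustrative sketch)."""
    n = len(X_clean)
    scores = np.empty(n)
    for i in range(n):
        model = fit_score(np.delete(X_clean, i, axis=0))
        scores[i] = model(X_clean[i:i + 1])[0]
    return scores

def bh_detect(test_scores, calib_scores, alpha=0.1):
    """Conformal p-values from calibration scores, then a
    Benjamini-Hochberg step-up rule targeting FDR <= alpha."""
    n = len(calib_scores)
    # p-value: (1 + #{calibration scores >= test score}) / (n + 1)
    pvals = np.array([(1 + np.sum(calib_scores >= s)) / (n + 1)
                      for s in test_scores])
    m = len(pvals)
    order = np.argsort(pvals)
    below = pvals[order] <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

def auto_ms(candidates, X_clean, X_test, alpha=0.1):
    """Select the candidate detector that flags the most outliers,
    with each candidate's detections calibrated for FDR control."""
    best, best_rej = None, np.zeros(len(X_test), dtype=bool)
    for name, fit_score in candidates.items():
        calib = jackknife_scores(fit_score, X_clean)
        model = fit_score(X_clean)
        rej = bh_detect(model(X_test), calib, alpha)
        if rej.sum() > best_rej.sum():
            best, best_rej = name, rej
    return best, best_rej
```

A toy usage: with a single distance-from-mean detector as candidate, clean data drawn from a standard normal, and a test set containing a few points far from the origin, `auto_ms` returns that detector together with a boolean mask flagging the far points.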
