
Self-Supervised Aggregation of Diverse Experts for Test-Agnostic Long-Tailed Recognition
Yifan Zhang · Bryan Hooi · Lanqing Hong · Jiashi Feng

Tue Nov 29 02:00 PM -- 04:00 PM (PST) @ Hall J #634

Existing long-tailed recognition methods, which aim to train class-balanced models from long-tailed data, generally assume the models will be evaluated on a uniform test class distribution. However, practical test class distributions often violate this assumption (e.g., being long-tailed or even inversely long-tailed), which may cause existing methods to fail in real applications. In this paper, we study a more practical yet challenging task, called test-agnostic long-tailed recognition, where the training class distribution is long-tailed while the test class distribution is unknown and not necessarily uniform. Beyond class imbalance, this task poses another challenge: the class distribution shift between the training and test data is unknown. To tackle this task, we propose a novel approach, called Self-supervised Aggregation of Diverse Experts, which consists of two strategies: (i) a new skill-diverse expert learning strategy that trains multiple experts from a single, stationary long-tailed dataset to separately handle different class distributions; (ii) a novel test-time expert aggregation strategy that leverages self-supervision to aggregate the learned experts for handling unknown test class distributions. We theoretically show that our self-supervised strategy has a provable ability to simulate test-agnostic class distributions. Promising empirical results demonstrate the effectiveness of our method on both vanilla and test-agnostic long-tailed recognition. Source code is available in the supplementary material.
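To illustrate the test-time aggregation idea at a high level, the sketch below combines the softmax outputs of several experts with a learned convex weighting, chosen to maximize prediction consistency between two augmented views of the test data. All shapes, the random expert outputs, and the simplex search are hypothetical stand-ins (the paper's actual method optimizes such weights with its own self-supervised objective and trained experts); this is a minimal, self-contained sketch of the concept only.

```python
import numpy as np

rng = np.random.default_rng(0)
num_experts, num_samples, num_classes = 3, 8, 5

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical per-expert predictions on two augmented views of each test
# sample; in the real method these would come from the trained experts.
view1 = softmax(rng.normal(size=(num_experts, num_samples, num_classes)))
view2 = softmax(rng.normal(size=(num_experts, num_samples, num_classes)))

def consistency(w):
    """Average cosine similarity between the two views' aggregated
    predictions: a stable weighting scores higher."""
    p1 = np.tensordot(w, view1, axes=1)  # (num_samples, num_classes)
    p2 = np.tensordot(w, view2, axes=1)
    num = (p1 * p2).sum(axis=-1)
    den = np.linalg.norm(p1, axis=-1) * np.linalg.norm(p2, axis=-1)
    return float((num / den).mean())

# Coarse random search over the probability simplex for expert weights
# (a gradient-based update would be used in practice).
best_w, best_s = None, -np.inf
for w in rng.dirichlet(np.ones(num_experts), size=500):
    s = consistency(w)
    if s > best_s:
        best_w, best_s = w, s

# Final prediction: argmax of the consistency-weighted expert mixture.
final_pred = np.tensordot(best_w, view1, axes=1).argmax(axis=-1)
```

The key design point is that no test labels are needed: prediction stability across augmentations serves as the self-supervised signal for choosing how much to trust each expert.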

Author Information

Yifan Zhang (National University of Singapore)
Bryan Hooi (National University of Singapore)
Lanqing Hong (Huawei Noah's Ark Lab)
Jiashi Feng (UC Berkeley)