

Poster

Collaborative Refining for Learning from Inaccurate Labels

Bin Han · Yi-Xuan Sun · Ya-Lin Zhang · Libang Zhang · Haoran Hu · Longfei Li · Jun Zhou · Guo Ye · Huimei He

East Exhibit Hall A-C #3509
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

This paper considers the problem of learning from multiple sets of inaccurate labels, which can be easily obtained from low-cost annotators such as rule-based annotators. Previous works typically concentrate on aggregating information from all the annotators, overlooking the significance of data refinement. We present a collaborative refining approach for learning from inaccurate labels. To refine the data, we introduce annotator agreement as an instrument, i.e., whether multiple annotators agree or disagree on the label for a given sample. For samples on which some annotators disagree, a comparative strategy is proposed to filter noise: through theoretical analysis, the connections among the multiple sets of labels, the respective models trained on them, and the true labels are uncovered to identify relatively reliable labels. For samples on which all annotators agree, an aggregating strategy is designed to mitigate potential noise: guided by theoretical bounds on loss values, a sample selection criterion is introduced and then modified to be more robust against potentially problematic values. With these two strategies, all samples are refined during training, and the refined samples are simultaneously used to train a lightweight model. Extensive experiments on benchmark and real-world datasets demonstrate the superiority of our method.
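To make the high-level workflow concrete, below is a minimal illustrative sketch of a refining loop of this kind, not the authors' implementation: the per-annotator auxiliary models, the comparative rule for disagreement samples (keep the candidate label with the smallest loss under its annotator-specific model), and the small-loss selection cutoff for agreement samples are simplified stand-ins assumed purely for illustration; the paper's theoretical criteria are not reproduced here.

```python
# Illustrative sketch only; the filtering and selection rules below are
# simplified assumptions, not the criteria derived in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression


def cross_entropy(p, y):
    """Per-sample binary cross-entropy of predicted probability p against label y."""
    eps = 1e-12
    return -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))


def refine_labels(X, Y_annot, keep_ratio=0.8):
    """Refine multiple inaccurate label sets.

    X        : (n, d) feature matrix
    Y_annot  : (m, n) binary labels from m low-cost annotators
    Returns indices of retained samples and their refined labels.
    """
    m, n = Y_annot.shape

    # One auxiliary model per annotator, trained on that annotator's labels.
    models = [LogisticRegression(max_iter=1000).fit(X, Y_annot[k]) for k in range(m)]
    probs = np.stack([mdl.predict_proba(X)[:, 1] for mdl in models])  # (m, n)

    # Split samples by annotator agreement.
    agree = np.all(Y_annot == Y_annot[0], axis=0)

    keep_idx, refined = [], []

    # Disagreement samples: comparative stand-in -- keep the candidate label
    # assigned the smallest loss by its own annotator's model.
    for i in np.where(~agree)[0]:
        losses = [cross_entropy(probs[k, i], Y_annot[k, i]) for k in range(m)]
        best = int(np.argmin(losses))
        keep_idx.append(int(i))
        refined.append(Y_annot[best, i])

    # Agreement samples: small-loss selection stand-in -- retain only samples
    # whose average loss under the agreed label falls below a quantile cutoff.
    agree_ids = np.where(agree)[0]
    if agree_ids.size:
        avg_loss = np.array([
            np.mean([cross_entropy(probs[k, i], Y_annot[0, i]) for k in range(m)])
            for i in agree_ids
        ])
        cutoff = np.quantile(avg_loss, keep_ratio)
        for i, loss in zip(agree_ids, avg_loss):
            if loss <= cutoff:
                keep_idx.append(int(i))
                refined.append(Y_annot[0, i])

    return np.array(keep_idx), np.array(refined)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))
    y_true = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    # Three noisy annotators with different label-flip rates.
    Y_annot = np.stack([np.where(rng.random(500) < r, 1 - y_true, y_true)
                        for r in (0.1, 0.2, 0.3)])
    idx, y_ref = refine_labels(X, Y_annot)
    print(f"kept {len(idx)} samples, refined-label accuracy: "
          f"{(y_ref == y_true[idx]).mean():.3f}")
```

The refined subset returned here would then be fed to the lightweight downstream model mentioned in the abstract; in the paper this refinement happens jointly with training rather than as a separate preprocessing pass.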
