

Poster in Workshop: Medical Imaging meets NeurIPS

Semi-supervised Learning Using Robust Loss

Wenhui Cui · Haleh Akrami · Richard Leahy


Abstract:

The amount of manually labeled data is limited in medical applications, so semi-supervised learning and automatic labeling strategies can be an asset for training deep neural networks. However, the quality of automatically generated labels can be uneven and inferior to that of manual labels. This paper proposes a semi-supervised training strategy that leverages both manually labeled data and additional unlabeled data. In contrast to existing approaches, we apply a robust loss to the automatically labeled data to compensate for its uneven quality. First, we generate pseudo-labels for the unlabeled data using a model pre-trained on the labeled data. These pseudo-labels are noisy, and training on them alongside the labeled data can severely degrade the learned feature representations and the generalization of the model. We mitigate the effect of these noisy pseudo-labels by using a robust loss, Beta Cross-Entropy. We show in a segmentation application that our proposed strategy improves model performance by down-weighting labels with lower predicted likelihood.
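As a rough illustration of the robust loss the abstract refers to, the sketch below implements one common form of Beta Cross-Entropy from the density power divergence literature; the exact formulation and hyperparameters used in the paper may differ. The function name and the choice of `beta` are assumptions for illustration. As `beta` tends to 0, this loss reduces (up to an additive constant) to standard cross-entropy, while for `beta > 0` it bounds the penalty on low-likelihood (possibly mislabeled) samples instead of letting it diverge.

```python
import numpy as np

def beta_cross_entropy(probs, y, beta=0.1):
    """Beta Cross-Entropy over a batch (illustrative sketch).

    probs : (N, C) array of predicted class probabilities (rows sum to 1)
    y     : (N,) array of integer class labels (e.g. per-pixel labels
            flattened from a segmentation map)
    beta  : robustness parameter; beta -> 0 recovers cross-entropy
            (plus a constant), larger beta down-weights unlikely labels
    """
    n = probs.shape[0]
    p_y = probs[np.arange(n), y]  # probability assigned to the given label
    # Data-fit term: bounded analogue of -log p_y; stays finite as p_y -> 0
    term1 = -((beta + 1.0) / beta) * (p_y ** beta - 1.0)
    # Normalization term over all classes (constant 1 when beta -> 0)
    term2 = (probs ** (beta + 1.0)).sum(axis=1)
    return float((term1 + term2).mean())
```

Because `term1` saturates at `(beta + 1) / beta` as `p_y` approaches 0, a confidently wrong pseudo-label contributes a bounded penalty, which is the down-weighting behavior the abstract describes.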
