

Poster in Workshop: Distribution shifts: connecting methods and applications (DistShift)

A Unified DRO View of Multi-class Loss Functions with top-N Consistency

Dixian Zhu · Tianbao Yang


Abstract: Multi-class classification is one of the most common tasks in machine learning, where each example is labeled with one of many class labels. Many loss functions have been proposed for multi-class classification, including two well-known ones: the cross-entropy (CE) loss and the Crammer-Singer (CS) loss (a.k.a. the multi-class SVM loss). While the CS loss has been used widely in traditional machine learning on structured data, the CE loss is usually the better (and default) choice for multi-class deep learning tasks. Top-$k$ variants of the CS and CE losses have also been proposed to promote classifiers with better top-$k$ accuracy. Nevertheless, the relationship between these different losses remains unclear, which hinders our understanding of what to expect from them in different scenarios. Moreover, in many real-world applications (e.g., natural image classification), data is often inherently multi-label, which renders the given single-label annotations incomplete and noisy; overfitting to such annotations with high-capacity deep neural networks can harm generalization. In this paper, we present a unified view of the CS/CE losses and their smoothed top-$k$ variants by proposing a new family of loss functions, which are arguably better suited than the CS/CE losses when the given label information is incomplete and noisy. The new family of smooth loss functions, named the {label-distributionally robust (LDR) loss}, is defined by leveraging the distributionally robust optimization (DRO) framework to model the uncertainty in the given label information: the uncertainty over true class labels is captured by distributional weight (DW) variables over the labels, regularized by a function. We make two observations: (i) the CS loss and the CE loss are two special cases of the LDR loss obtained by choosing particular values of the involved regularization parameter; hence the LDR loss interpolates between the CS loss and the CE loss, avoids the defects of the CS loss, enjoys more flexibility than the CE loss by varying the regularization strength on the DW variables, and also induces new variants; (ii) the smoothed top-$k$ losses are also special cases of the LDR loss, obtained by restricting the DW variables to a bounded ball. We further propose a variant of LDR specialized for top-$k$ classification, named LDR-$k$, for which we develop a novel efficient analytical solution. Theoretically, we prove that both the LDR and LDR-$k$ loss families are calibrated and hence Fisher consistent for a broad family of DW regularization functions, establishing the top-$N$ consistency (for any $N\geq 1$) of the proposed losses; this not only agrees with existing consistency results for the CS and CE losses but also resolves some open problems regarding the consistency of top-$k$ SVM losses. Empirically, we provide experimental results on synthetic data and real-world benchmark data to validate the effectiveness of the new variants of the LDR loss.
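To make the described interpolation concrete, below is a minimal NumPy sketch of an entropy-regularized DRO loss over distributional weights on the labels, in the spirit of the LDR family summarized above. The function name `ldr_loss_sketch`, the temperature parameter `tau`, the choice of a Shannon-entropy regularizer, and the omission of margin terms are illustrative assumptions rather than the paper's exact definition; under these assumptions, `tau = 1` reduces to the CE loss and `tau -> 0` to a CS-style max loss.

```python
# Minimal NumPy sketch of an entropy-regularized DRO-style loss over
# distributional weights on the class labels.  Illustration only: the
# regularizer (Shannon entropy), the temperature name `tau`, and the
# omission of margin terms are assumptions, not the authors' formulation.

import numpy as np
from scipy.special import logsumexp


def ldr_loss_sketch(scores: np.ndarray, y: int, tau: float) -> float:
    """Entropy-regularized DRO loss over distributional weights p.

    Computes  max_{p in simplex}  sum_k p_k * (s_k - s_y) + tau * H(p),
    whose closed form is  tau * logsumexp((s - s_y) / tau).
    """
    diffs = scores - scores[y]           # d_k = s_k - s_y for every class k
    if tau <= 0.0:                       # tau -> 0: un-regularized max over p ...
        return float(np.max(diffs))      # ... recovers a CS-style (multi-class SVM) loss
    return float(tau * logsumexp(diffs / tau))


# tau = 1 recovers the cross-entropy loss exactly (logits = scores):
scores = np.array([1.2, -0.3, 0.7, 2.1])
y = 2
ce = logsumexp(scores) - scores[y]
assert np.isclose(ldr_loss_sketch(scores, y, tau=1.0), ce)

# small tau approaches the CS-style term max_k (s_k - s_y):
print(ldr_loss_sketch(scores, y, tau=1e-3), np.max(scores - scores[y]))
```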
