

Poster

Adversarial Multiclass Classification: A Risk Minimization Perspective

Rizal Fathony · Anqi Liu · Kaiser Asif · Brian Ziebart

Area 5+6+7+8 #173

Keywords: [ Kernel Methods ] [ (Other) Classification ] [ Regularization and Large Margin Methods ]


Abstract:

Recently proposed adversarial classification methods have shown promising results for cost-sensitive and multivariate losses. In contrast with empirical risk minimization (ERM) methods, which use convex surrogate losses to approximate the desired non-convex target loss function, adversarial methods minimize non-convex losses by treating the properties of the training data as uncertain and worst-case within a minimax game. Despite this difference in formulation, we recast adversarial classification under zero-one loss as an ERM method with a novel prescribed loss function. We demonstrate a number of theoretical and practical advantages over the very closely related hinge loss ERM methods. This establishes adversarial classification under the zero-one loss as a method that fills the long-standing gap in multiclass hinge loss classification, simultaneously guaranteeing Fisher consistency and universal consistency, while also providing dual parameter sparsity and high-accuracy predictions in practice.
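For context, the contrast the abstract draws between the non-convex target loss and a convex surrogate can be sketched with two standard definitions: the zero-one loss and the Crammer-Singer multiclass hinge loss. This is background illustration only, using textbook formulas rather than the paper's adversarial loss; `scores` denotes a vector of per-class potentials `f_j(x)` and `y` the true class index.

```python
import numpy as np

def zero_one_loss(scores, y):
    # Non-convex target loss: 1 if the top-scoring class is not y, else 0.
    return float(np.argmax(scores) != y)

def multiclass_hinge_loss(scores, y):
    # Crammer-Singer convex surrogate: max(0, 1 + max_{j != y} f_j - f_y).
    others = np.delete(scores, y)
    return max(0.0, 1.0 + others.max() - scores[y])

scores = np.array([2.0, 1.5, -0.5])
print(zero_one_loss(scores, 0))          # correct prediction -> 0.0
print(multiclass_hinge_loss(scores, 0))  # margin only 0.5 < 1 -> 0.5
```

Note how the surrogate still penalizes a correctly classified example whose margin is below 1; the paper's adversarial reformulation derives a different prescribed loss for the multiclass zero-one setting that avoids the consistency gaps of hinge-style surrogates.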
