
Direct Loss Minimization for Structured Prediction
David A McAllester · Tamir Hazan · Joseph Keshet

Mon Dec 06 12:00 AM -- 12:00 AM (PST)

In discriminative machine learning one is interested in training a system to optimize a certain desired measure of performance, or loss. In binary classification one typically tries to minimize the error rate. But in structured prediction each task often has its own measure of performance, such as the BLEU score in machine translation or the intersection-over-union score in PASCAL segmentation. The most common approaches to structured prediction, structural SVMs and CRFs, do not minimize the task loss: the former minimizes a surrogate loss with no guarantees for task loss, and the latter minimizes log loss independent of task loss. The main contribution of this paper is a theorem stating that a certain perceptron-like learning rule, involving feature vectors derived from loss-adjusted inference, directly corresponds to the gradient of task loss. We give empirical results on phonetic alignment of a standard test set from the TIMIT corpus, surpassing all previously reported results on this problem.
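The perceptron-like rule described in the abstract can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: it uses a toy multiclass joint feature map and 0-1 loss in place of a real structured task, and the feature map `phi`, step size, and epsilon are all hypothetical choices. The key idea is that the difference between the feature vector of the ordinary prediction and that of a loss-adjusted prediction, scaled by 1/epsilon, approximates the (negative) gradient of the task loss.

```python
import numpy as np

def phi(x, y, num_classes):
    """Toy joint feature map: place x in the block indexed by label y."""
    f = np.zeros(num_classes * len(x))
    f[y * len(x):(y + 1) * len(x)] = x
    return f

def task_loss(y_true, y_pred):
    """Stand-in task loss (0-1); a real structured task would use e.g. BLEU or IoU."""
    return 0.0 if y_true == y_pred else 1.0

def direct_loss_update(w, x, y_true, num_classes, eps=0.1, lr=0.5):
    """One perceptron-like direct-loss update (hypothetical hyperparameters)."""
    # Ordinary inference: argmax_y  w . phi(x, y)
    scores = [w @ phi(x, y, num_classes) for y in range(num_classes)]
    y_hat = int(np.argmax(scores))
    # Loss-adjusted inference: argmax_y  w . phi(x, y) + eps * L(y_true, y)
    adjusted = [scores[y] + eps * task_loss(y_true, y) for y in range(num_classes)]
    y_dir = int(np.argmax(adjusted))
    # Update along (1/eps) * (phi(y_hat) - phi(y_dir)), which approximates
    # the negative gradient of the expected task loss as eps -> 0.
    return w + (lr / eps) * (phi(x, y_hat, num_classes) - phi(x, y_dir, num_classes))

# Tiny usage example: one update steers w toward the correct label.
w = np.zeros(4)
x = np.array([1.0, 0.0])
w = direct_loss_update(w, x, y_true=0, num_classes=2)
scores = [w @ phi(x, y, 2) for y in range(2)]
print(int(np.argmax(scores)))  # predicts class 0 after the update
```

Note that the only change from a standard structured perceptron is the loss-adjusted argmax: the task loss perturbs the inference objective, so the update direction carries information about the task loss rather than a surrogate.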

Author Information

David A McAllester (TTI-Chicago)
Tamir Hazan (Technion)
Joseph Keshet (Bar-Ilan University)
