

Poster

A Primal Dual Formulation For Deep Learning With Constraints

Yatin Nandwani · Abhishek Pathak · Mausam · Parag Singla

East Exhibition Hall B, C #204

Keywords: [ Deep Learning ] [ Deep Learning -> Efficient Training Methods ] [ Optimization for Deep Networks ]


Abstract:

For several problems of interest, natural constraints exist over the output label space. For example, for the joint task of NER and POS labeling, these constraints might specify that the NER label ‘organization’ is consistent only with the POS labels ‘noun’ and ‘preposition’. Such constraints offer an effective way of injecting prior knowledge into a deep learning model, thereby improving overall performance. In this paper, we present a constrained optimization formulation for training a deep network with a given set of hard constraints on output labels. Our novel approach first converts the label constraints into soft logic constraints over the probability distributions output by the network. It then converts the constrained optimization problem into an alternating min-max optimization with Lagrangian variables defined for each constraint. Since the constraints are independent of the target labels, our framework easily generalizes to the semi-supervised setting. We experiment on the tasks of Semantic Role Labeling (SRL), Named Entity Recognition (NER) tagging, and fine-grained entity typing, and show that our approach not only significantly reduces the number of constraint violations, but can also achieve state-of-the-art performance.
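To illustrate the alternating min-max scheme described above, the following is a minimal PyTorch sketch, not the authors' implementation. It assumes a hypothetical constraint_violations function that maps predicted label distributions to per-constraint soft-logic violation scores; all names (model, task_loss_fn, dual_lr) are illustrative placeholders. The primal step descends on the task loss plus the multiplier-weighted violations; the dual step ascends on the multipliers, one per constraint.

import torch

def constraint_violations(probs):
    # Placeholder (assumption): returns a tensor of shape (num_constraints,)
    # with non-negative soft violations, e.g. for a rule "label A implies
    # label B", relu(p_A - p_B) averaged over the batch.
    raise NotImplementedError

def train_step(model, optimizer, lambdas, x, y, task_loss_fn, dual_lr=0.01):
    # ----- primal (min) step: update the network weights -----
    optimizer.zero_grad()
    probs = model(x)                                  # predicted label distributions
    violations = constraint_violations(probs)         # one entry per constraint
    loss = torch.dot(lambdas, violations)             # constraint penalty term
    if y is not None:                                 # labels absent in the
        loss = loss + task_loss_fn(probs, y)          # semi-supervised case
    loss.backward()
    optimizer.step()

    # ----- dual (max) step: gradient ascent on the Lagrange multipliers -----
    with torch.no_grad():
        violations = constraint_violations(model(x))
        lambdas += dual_lr * violations               # ascend on violations
        lambdas.clamp_(min=0.0)                       # multipliers stay non-negative
    return loss.item()

Because the penalty term depends only on the predicted distributions and not on the gold labels, the same step can be run on unlabeled batches by dropping the task loss, which is how the semi-supervised generalization mentioned in the abstract would look under these assumptions.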
