
Regularized Learning with Networks of Features
Ted Sandler · John Blitzer · Partha P Talukdar · Lyle Ungar

Mon Dec 08 08:45 PM -- 12:00 AM (PST)

For many supervised learning problems, we possess prior knowledge about which features yield similar information about the target variable. In predicting the topic of a document, we might know that two words are synonyms; in image recognition, we know which pixels are adjacent. Such synonymous or neighboring features are near-duplicates and should therefore be expected to have similar weights in a good model. Here we present a framework for regularized learning in settings where one has prior knowledge about which features are expected to have similar and dissimilar weights. This prior knowledge is encoded as a graph whose vertices represent features and whose edges represent similarities and dissimilarities between them. During learning, each feature's weight is penalized by the amount it differs from the average weight of its neighbors. For text classification, regularization using graphs of word co-occurrences outperforms manifold learning and compares favorably with other recently proposed semi-supervised learning methods. For sentiment analysis, feature graphs constructed from declarative human knowledge, as well as from auxiliary task learning, significantly improve prediction accuracy.
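The core idea in the abstract — penalizing each feature's weight by how much it deviates from the average weight of its graph neighbors — can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes a row-stochastic similarity matrix `P` over features (so `P @ w` gives each feature's neighbor-average weight), writes the penalty as ||(I - P)w||², and trains a logistic regression with plain gradient descent. The function names, optimizer, and hyperparameters are illustrative choices.

```python
import numpy as np

def feature_network_penalty(w, P):
    """Omega(w) = ||(I - P) w||^2: each weight is pulled toward the
    average weight of its neighbors under the row-stochastic matrix P."""
    d = w - P @ w
    return d @ d

def fit_logistic_network_reg(X, y, P, alpha=1.0, lr=0.1, n_iter=500):
    """Logistic regression with the feature-network regularizer,
    trained by gradient descent (an illustrative solver, not the paper's)."""
    n, d = X.shape
    w = np.zeros(d)
    M = np.eye(d) - P                      # I - P
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        # gradient of mean log-loss plus gradient of alpha * ||M w||^2
        grad = X.T @ (p - y) / n + 2.0 * alpha * (M.T @ (M @ w))
        w -= lr * grad
    return w
```

With two near-duplicate features linked in the graph (e.g. two synonyms), the penalty drives their weights together, which is the behavior the abstract describes; a feature whose only neighbor is itself incurs no penalty.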

Author Information

Ted Sandler (University of Pennsylvania)
John Blitzer (Google)
Partha P Talukdar (University of Pennsylvania)
Lyle Ungar (University of Pennsylvania)
