Poster
Dropout Training as Adaptive Regularization
Stefan Wager · Sida Wang · Percy Liang

Fri Dec 6th 07:00 -- 11:59 PM @ Harrah's Special Events Center, 2nd Floor
Dropout and other feature noising schemes control overfitting by artificially corrupting the training data. For generalized linear models, dropout performs a form of adaptive regularization. Using this viewpoint, we show that the dropout regularizer is first-order equivalent to an $L_2$ regularizer applied after scaling the features by an estimate of the inverse diagonal Fisher information matrix. We also establish a connection to AdaGrad, an online learner, and find that a close relative of AdaGrad operates by repeatedly solving linear dropout-regularized problems. By casting dropout as regularization, we develop a natural semi-supervised algorithm that uses unlabeled data to create a better adaptive regularizer. We apply this idea to document classification tasks, and show that it consistently boosts the performance of dropout training, improving on state-of-the-art results on the IMDB reviews dataset.
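
As a rough sketch of the claimed first-order equivalence (the notation here is ours, not taken verbatim from the paper): let $A$ be the log-partition function of the generalized linear model, $x_i$ the feature vectors, $\beta$ the weights, and $\tilde{x}_i$ the dropout-corrupted features, with each coordinate zeroed with probability $\delta$ and rescaled by $1/(1-\delta)$ so that $\mathbb{E}[\tilde{x}_i] = x_i$. The expected excess loss introduced by the noising is then, to second order,

$$R(\beta) = \sum_i \Big( \mathbb{E}\big[A(\tilde{x}_i \cdot \beta)\big] - A(x_i \cdot \beta) \Big) \approx \frac{1}{2} \sum_i A''(x_i \cdot \beta)\, \mathrm{Var}\big[\tilde{x}_i \cdot \beta\big] = \frac{\delta}{2(1-\delta)} \sum_j \beta_j^2 \sum_i A''(x_i \cdot \beta)\, x_{ij}^2,$$

where the inner sum over $i$ is a plug-in estimate of the $j$-th diagonal entry of the Fisher information, so dropout acts (to this order) like an $L_2$ penalty applied after rescaling each feature by that estimate.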

Author Information

Stefan Wager (Stanford University)
Sida Wang (Facebook AI Research)
Percy Liang (Stanford University)
