
Randomized Pruning: Efficiently Calculating Expectations in Large Dynamic Programs
Alexandre Bouchard-Côté · Slav Petrov · Dan Klein

Tue Dec 08 05:24 PM -- 05:25 PM (PST)

Pruning can massively accelerate the computation of feature expectations in large models. However, any single pruning mask introduces bias. We present a novel approach that employs a randomized sequence of pruning masks. Formally, we apply auxiliary-variable MCMC sampling to generate this sequence of masks, thereby gaining theoretical guarantees about convergence. Because each mask can generally skip large portions of an underlying dynamic program, our approach is particularly compelling for high-degree algorithms. Empirically, we demonstrate our method on bilingual parsing, showing decreasing bias as more masks are incorporated, and outperforming fixed tic-tac-toe pruning.
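To build intuition for why a randomized sequence of masks can drive the bias of pruning toward zero, here is a minimal, self-contained sketch. It is not the paper's auxiliary-variable MCMC construction; instead it uses a simpler stand-in (Horvitz-Thompson reweighting of randomly pruned items) to illustrate the core idea that averaging over many random masks, each of which skips part of the computation, recovers the exact expectation, whereas any single fixed mask would not. All names and the toy distribution are invented for illustration.

```python
import random

# Toy unnormalized distribution over five items, with a feature value per item.
weights  = [5.0, 3.0, 0.4, 0.2, 0.1]   # a few heavy items dominate the mass
features = [1.0, 0.0, 1.0, 1.0, 0.0]

# Exact feature expectation (what the full, unpruned dynamic program computes).
Z = sum(weights)
exact = sum(w * f for w, f in zip(weights, features)) / Z

def pruned_estimate(keep_prob=0.5, rng=random):
    """One randomized pruning mask: heavy items are always kept; each light
    item is kept with probability keep_prob and reweighted by 1/keep_prob
    (Horvitz-Thompson), so the masked numerator and denominator sums are
    unbiased even though most light items are skipped."""
    num = den = 0.0
    for w, f in zip(weights, features):
        if w >= 1.0:                      # heavy items: never pruned
            num += w * f
            den += w
        elif rng.random() < keep_prob:    # light items: random mask decides
            num += (w / keep_prob) * f
            den += w / keep_prob
    return num, den

rng = random.Random(0)
num_acc = den_acc = 0.0
for _ in range(20000):                    # accumulate over many random masks
    n, d = pruned_estimate(rng=rng)
    num_acc += n
    den_acc += d
estimate = num_acc / den_acc              # ratio of averaged sums -> exact

print(f"exact={exact:.4f}  randomized-pruning estimate={estimate:.4f}")
```

Each individual mask here skips, on average, half of the light items, yet the running estimate converges to the exact expectation as masks accumulate; a single fixed mask that permanently dropped those items would be biased no matter how long it ran.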

Author Information

Alexandre Bouchard-Côté (UBC)
Slav Petrov (Google Research - Language)
Dan Klein (UC Berkeley)
