Poster
On-the-fly Operation Batching in Dynamic Computation Graphs
Graham Neubig · Yoav Goldberg · Chris Dyer

Tue Dec 05 06:30 PM -- 10:30 PM (PST) @ Pacific Ballroom #126

Dynamic neural network toolkits such as PyTorch, DyNet, and Chainer offer more flexibility for implementing models that cope with data of varying dimensions and structure, relative to toolkits that operate on statically declared computations (e.g., TensorFlow, CNTK, and Theano). However, existing toolkits, both static and dynamic, require that the developer organize the computations into the batches necessary for exploiting high-performance data-parallel algorithms and hardware. This batching task is generally difficult, and it becomes a major hurdle as architectures grow more complex. In this paper, we present an algorithm, and its implementation in the DyNet toolkit, for automatically batching operations. Developers simply write minibatch computations as aggregations of single-instance computations, and the batching algorithm seamlessly executes them, on the fly, in computationally efficient batches. On a variety of tasks, we obtain throughput similar to that of manual batching, as well as comparable speedups over single-instance learning on architectures that are impractical to batch manually.
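To make the idea concrete, the following is a minimal sketch of the core scheduling concept, not the DyNet implementation itself: among graph nodes whose inputs are already computed, nodes sharing the same operation signature are grouped and executed together as a single data-parallel "batch". All names (`Node`, `run_batched`) and the toy scalar operations are illustrative assumptions.

```python
# Sketch of on-the-fly operation batching over a small computation graph.
# Ready nodes (all inputs computed) with the same op signature are
# launched together, so two independent instances of the same network
# cost one launch per op type, not one launch per node.
from collections import defaultdict

class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op = op            # "input", "add", or "mul" in this toy example
        self.inputs = list(inputs)
        self.value = value      # filled in once the node is executed

def run_batched(nodes):
    """Execute the graph, batching same-signature ready nodes.
    Returns the number of batched launches performed."""
    done = {i for i, n in enumerate(nodes) if n.op == "input"}
    launches = 0
    while len(done) < len(nodes):
        # Group not-yet-executed nodes whose inputs are all ready.
        ready = defaultdict(list)
        for i, n in enumerate(nodes):
            if i not in done and all(j in done for j in n.inputs):
                ready[n.op].append(i)
        for op, batch in ready.items():
            launches += 1  # one data-parallel launch per signature
            for i in batch:
                a, b = (nodes[j].value for j in nodes[i].inputs)
                nodes[i].value = a + b if op == "add" else a * b
            done.update(batch)
    return launches
```

For example, two independent instances that each perform an `add` followed by a `mul` execute in two batched launches rather than four single-instance ones:

```python
nodes = [Node("input", value=1.0), Node("input", value=2.0),
         Node("input", value=3.0), Node("input", value=4.0),
         Node("add", (0, 1)), Node("add", (2, 3)),
         Node("mul", (4, 0)), Node("mul", (5, 2))]
run_batched(nodes)  # -> 2 launches
```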

Author Information

Graham Neubig (Carnegie Mellon University)
Yoav Goldberg (Bar-Ilan University)
Chris Dyer (DeepMind)
