

Keynote
in
Workshop: NIPS Highlights (MLTrain), Learn How to Code a Paper with State-of-the-Art Frameworks

Simple and Efficient Implementation of Neural Nets with Automatic Operation Batching

Graham Neubig


Abstract:

In this talk I will describe how to easily and efficiently develop neural network models for complicated problems such as natural language processing using dynamic neural networks. First, I will briefly explain the different paradigms in neural network toolkits: static declaration (e.g. TensorFlow), dynamic and eager (e.g. PyTorch), and dynamic and lazy (e.g. DyNet). I will discuss how to efficiently implement models within dynamic neural networks, including minimizing the number of computations and mini-batching. Then I'll introduce our recently proposed method for automatic batching in dynamic networks, which makes it much easier to implement complicated networks efficiently. Code examples for the implementation will be provided.
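To make the "dynamic and lazy" paradigm concrete, the sketch below shows the core idea behind automatic operation batching in pure Python. It is a conceptual illustration, not DyNet's actual API: expressions build a lazy graph, and evaluation groups nodes that share the same operation and graph depth into a single (simulated) batched kernel call, so many independent expressions cost only one call per operation type per level.

```python
import math
from collections import defaultdict

class Node:
    """A lazily evaluated node in a computation graph (illustrative only)."""
    def __init__(self, op, inputs=(), value=None):
        self.op = op              # operation name, e.g. "add", "tanh"
        self.inputs = list(inputs)
        self.value = value        # filled in by evaluate()

# Graph-building helpers: nothing is computed yet, only nodes are created.
def const(v):
    return Node("const", value=float(v))

def add(a, b):
    return Node("add", [a, b])

def tanh(x):
    return Node("tanh", [x])

def evaluate(roots):
    """Evaluate all root nodes, batching same-op nodes at the same depth.

    Returns (values, number_of_batched_kernel_calls).
    """
    # 1. Compute the depth of every reachable node.
    depth = {}
    def visit(n):
        if n in depth:
            return depth[n]
        d = 0 if not n.inputs else 1 + max(visit(p) for p in n.inputs)
        depth[n] = d
        return d
    for r in roots:
        visit(r)

    # 2. Group nodes by (depth, op); each group is one batched call.
    groups = defaultdict(list)
    for n, d in depth.items():
        groups[(d, n.op)].append(n)

    kernels = {
        "const": lambda n: n.value,
        "add":   lambda n: n.inputs[0].value + n.inputs[1].value,
        "tanh":  lambda n: math.tanh(n.inputs[0].value),
    }

    # 3. Execute groups in depth order, counting batched calls.
    calls = 0
    for (d, op), nodes in sorted(groups.items()):
        calls += 1  # one simulated batched kernel launch per group
        for n in nodes:  # in a real toolkit this loop is a single vector op
            n.value = kernels[op](n)
    return [r.value for r in roots], calls
```

For example, four independent `tanh(a + b)` expressions contain sixteen nodes, but batched evaluation issues only three kernel calls (one each for the constants, the additions, and the tanhs), which is the source of the speedups the talk describes.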
