Tutorial
Large-Scale Distributed Systems for Training Neural Networks
Jeff Dean · Oriol Vinyals

Mon Dec 07 06:30 AM -- 08:30 AM (PST) @ Level 2 room 210 E,F

Over the past few years, we have built large-scale computer systems for training neural networks, and then applied these systems to a wide variety of problems that have traditionally been very difficult for computers. We have made significant improvements in the state-of-the-art in many of these areas, and our software systems and algorithms have been used by dozens of different groups at Google to train state-of-the-art models for speech recognition, image recognition, various visual detection tasks, language modeling, language translation, and many other tasks. In this talk, we'll highlight some of the distributed systems and algorithms that we use in order to train large models quickly, and demonstrate TensorFlow (tensorflow.org), an open-source software system we have put together that makes it easy to conduct research in large-scale machine learning.
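The data-parallel training style the abstract alludes to (workers computing gradients in parallel and pushing updates to shared parameters, in the spirit of asynchronous SGD) can be sketched with nothing but the Python standard library. This is a toy illustration, not TensorFlow's API; the `ParameterServer` class, `worker` function, and all parameters are invented for this sketch:

```python
import threading
import random

class ParameterServer:
    """Holds shared model parameters; workers read and update them concurrently."""
    def __init__(self, w=0.0, b=0.0):
        self.w, self.b = w, b
        self.lock = threading.Lock()

    def get(self):
        with self.lock:
            return self.w, self.b

    def apply_gradients(self, gw, gb, lr=0.01):
        # Apply a (possibly stale) gradient update without waiting
        # for other workers -- the essence of asynchronous training.
        with self.lock:
            self.w -= lr * gw
            self.b -= lr * gb

def worker(ps, shard, steps=2000):
    """Fit y = w*x + b by SGD on this worker's own data shard."""
    for _ in range(steps):
        x, y = random.choice(shard)
        w, b = ps.get()
        err = (w * x + b) - y                      # prediction error
        ps.apply_gradients(2 * err * x, 2 * err)   # squared-loss gradients

# Noiseless data for y = 3x + 1, split across two worker shards.
data = [(x / 10.0, 3 * (x / 10.0) + 1) for x in range(-20, 20)]
shards = [data[0::2], data[1::2]]

ps = ParameterServer()
threads = [threading.Thread(target=worker, args=(ps, s)) for s in shards]
for t in threads:
    t.start()
for t in threads:
    t.join()

w, b = ps.get()
print(f"w={w:.2f}, b={b:.2f}")  # converges near w=3, b=1
```

Because each worker reads parameters and applies its gradient independently, some updates are computed from slightly stale values; with a small learning rate this still converges, which is what makes the asynchronous approach attractive for scaling training across many machines.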

Author Information

Jeff Dean (Google Brain Team)

Jeff joined Google in 1999 and is currently a Google Senior Fellow. He leads Google's Research and Health divisions, and co-founded the Google Brain team. He has co-designed/implemented multiple generations of Google's distributed machine learning systems for neural network training and inference, as well as multiple generations of Google's crawling, indexing, and query serving systems, and major pieces of Google's initial advertising and AdSense for Content systems. He is also a co-designer and co-implementer of much of Google's distributed computing infrastructure, including the MapReduce, BigTable, and Spanner systems, protocol buffers, LevelDB, systems infrastructure for statistical machine translation, and a variety of internal and external libraries and developer tools. He received a Ph.D. in Computer Science from the University of Washington in 1996, working with Craig Chambers on compiler techniques for object-oriented languages. He is a Fellow of the ACM, a Fellow of the AAAS, a member of the U.S. National Academy of Engineering, and a recipient of the Mark Weiser Award and the ACM Prize in Computing.

Oriol Vinyals (Google)

Oriol Vinyals is a Research Scientist at Google. He works in deep learning with the Google Brain team. Oriol holds a Ph.D. in EECS from the University of California, Berkeley, and a Master's degree from the University of California, San Diego. He is a recipient of the 2011 Microsoft Research PhD Fellowship. He was an early adopter of the new deep learning wave at Berkeley, and in his thesis he focused on non-convex optimization and recurrent neural networks. At Google Brain he continues working on his areas of interest, which include artificial intelligence, with particular emphasis on machine learning, language, and vision.
