

Workshop

Has it Trained Yet? A Workshop for Algorithmic Efficiency in Practical Neural Network Training

Frank Schneider · Zachary Nado · Philipp Hennig · George Dahl · Naman Agarwal

Theater B

Fri 2 Dec, 6:30 a.m. PST

Workshop Description

Training contemporary neural networks is a lengthy and often costly process, both in human designer time and compute resources. Although the field has invented numerous approaches, neural network training still usually involves an inconvenient amount of “babysitting” to get the model to train properly. This not only requires enormous compute resources but also makes deep learning less accessible to outsiders and newcomers. This workshop will be centered around the question “How can we train neural networks faster?” by focusing on the effects algorithms (not hardware or software developments) have on the training time of neural networks. These algorithmic improvements can come in the form of novel methods, e.g., new optimizers or more efficient data selection strategies, or through empirical experience, e.g., best practices for quickly identifying well-performing hyperparameter settings or informative metrics to monitor during training.
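
To make the last example concrete, here is a minimal, hypothetical sketch (not from the workshop organizers) of monitoring one such metric: it logs the training loss and the global gradient norm in a toy PyTorch loop. The model, data, and hyperparameters are arbitrary assumptions chosen only for illustration.

```python
import torch
import torch.nn as nn

# Toy setup: a small MLP on random data (purely illustrative).
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()
x, y = torch.randn(256, 10), torch.randn(256, 1)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    # clip_grad_norm_ with an infinite threshold clips nothing but
    # returns the global L2 norm of all gradients, a cheap signal
    # whose sudden spikes often precede training instability.
    grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), float("inf"))
    optimizer.step()
    if step % 10 == 0:
        print(f"step {step:3d}  loss {loss.item():.4f}  grad norm {grad_norm.item():.4f}")
```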

We all think we know how to train deep neural networks, but we all seem to have different ideas. Ask any deep learning practitioner about the best practices of neural network training, and you will often hear a collection of arcane recipes. Frustratingly, these hacks vary wildly between companies and teams. This workshop offers a platform to discuss these ideas and to separate what is actually known from what is just noise. In this sense, this will not be an “optimization workshop” in the mathematical sense (of which there have been several in the past, of course).

To this end, the workshop’s goal is to connect two communities:

- Researchers who develop new algorithms for faster neural network training, such as new optimization methods or deep learning architectures.
- Practitioners who, through their work on real-world problems, are increasingly relying on “tricks of the trade”.

The workshop aims to close the gap between research and applications, identifying the most relevant current issues that hinder faster neural network training in practice.

Topics

Among the topics addressed by the workshop are:

- What “best practices” for faster neural network training are used in practice and can we learn from them to build better algorithms?
- What are painful lessons learned while training deep learning models?
- What are the most needed algorithmic improvements for neural network training?
- How can we ensure that research on training methods for deep learning has practical relevance?

Important Dates

- Submission Deadline: September 30, 2022, 7:00 a.m. UTC (updated!)
- Accept/Reject Notification Date: October 20, 2022, 7:00 a.m. UTC (updated!)
- Workshop Date: December 2, 2022


Schedule