Training convolutional neural network models is memory intensive, since back-propagation requires storing the activations of all intermediate layers. This is a practical concern when deploying very deep architectures in production, especially when models must be frequently re-trained on updated datasets. In this paper, we propose a new implementation of back-propagation that significantly reduces memory usage by enabling the use of approximations with negligible computational cost and minimal effect on training performance. The algorithm reuses common buffers to temporarily store full activations and compute the forward pass exactly. It also stores approximate per-layer copies of the activations, at significant memory savings, for use in the backward pass. Compared to simply approximating activations within standard back-propagation, our method limits the accumulation of errors across layers, allowing the use of much lower-precision approximations without affecting training accuracy. Experiments on CIFAR-10, CIFAR-100, and ImageNet show that our method yields performance close to exact training while storing activations compactly with as little as 4-bit precision.
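The abstract sketches the core idea: compute the forward pass exactly, but keep only a low-precision copy of each layer's activations for the backward pass. The following is a minimal, hypothetical PyTorch sketch of that general idea only, not the paper's actual algorithm (which additionally reuses shared full-precision buffers and limits error accumulation across layers); the class name ApproxActivationReLU, the uniform quantizer, and the 4-bit default are illustrative assumptions.

```python
import torch

# Hypothetical sketch: a ReLU whose saved activation is stored at low precision,
# so the backward pass uses an approximate (quantized) copy of the activation.
class ApproxActivationReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, num_bits=4):
        y = x.clamp(min=0)                                   # exact forward computation
        scale = y.abs().max() / (2 ** num_bits - 1) + 1e-12  # uniform quantizer step
        q = torch.round(y / scale).to(torch.uint8)           # low-precision copy (stored one byte per value
                                                              # here; packing two 4-bit codes per byte is omitted)
        ctx.save_for_backward(q)
        ctx.scale = scale
        return y

    @staticmethod
    def backward(ctx, grad_out):
        (q,) = ctx.saved_tensors
        y_approx = q.float() * ctx.scale                     # dequantized approximation of the activation
        grad_in = grad_out * (y_approx > 0).float()          # ReLU gradient computed from the approximation
        return grad_in, None                                 # no gradient for num_bits


# Example usage:
#   x = torch.randn(8, 16, requires_grad=True)
#   y = ApproxActivationReLU.apply(x)
#   y.sum().backward()
```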
Author Information
Ayan Chakrabarti (Washington University in St. Louis)
Benjamin Moseley (Carnegie Mellon University)
More from the Same Authors
- 2022 Poster: Algorithms with Prediction Portfolios »
  Michael Dinitz · Sungjin Im · Thomas Lavastida · Benjamin Moseley · Sergei Vassilvitskii
- 2021: AI workloads inside databases »
  Guy Van den Broeck · Alexander Ratner · Benjamin Moseley · Konstantinos Karanasos · Parisa Kordjamshidi · Molham Aref · Arun Kumar
- 2021 Poster: Robust Online Correlation Clustering »
  Silvio Lattanzi · Benjamin Moseley · Sergei Vassilvitskii · Yuyan Wang · Rudy Zhou
- 2021 Oral: Faster Matchings via Learned Duals »
  Michael Dinitz · Sungjin Im · Thomas Lavastida · Benjamin Moseley · Sergei Vassilvitskii
- 2021 Poster: Faster Matchings via Learned Duals »
  Michael Dinitz · Sungjin Im · Thomas Lavastida · Benjamin Moseley · Sergei Vassilvitskii
- 2020 Poster: Fair Hierarchical Clustering »
  Sara Ahmadian · Alessandro Epasto · Marina Knittel · Ravi Kumar · Mohammad Mahdian · Benjamin Moseley · Philip Pham · Sergei Vassilvitskii · Yuyan Wang
- 2019 Poster: Training Image Estimators without Image Ground Truth »
  Zhihao Xia · Ayan Chakrabarti
- 2019 Spotlight: Training Image Estimators without Image Ground Truth »
  Zhihao Xia · Ayan Chakrabarti
- 2019 Poster: Cost Effective Active Search »
  Shali Jiang · Roman Garnett · Benjamin Moseley
- 2018 Poster: Efficient nonmyopic batch active search »
  Shali Jiang · Gustavo Malkomes · Matthew Abbott · Benjamin Moseley · Roman Garnett
- 2018 Spotlight: Efficient nonmyopic batch active search »
  Shali Jiang · Gustavo Malkomes · Matthew Abbott · Benjamin Moseley · Roman Garnett
- 2017 Poster: Approximation Bounds for Hierarchical Clustering: Average Linkage, Bisecting K-means, and Local Search »
  Benjamin Moseley · Joshua Wang
- 2017 Oral: Approximation Bounds for Hierarchical Clustering: Average Linkage, Bisecting K-means, and Local Search »
  Benjamin Moseley · Joshua Wang