Modern deep neural networks must demonstrate state-of-the-art accuracy while exhibiting low latency and energy consumption. As such, neural architecture search (NAS) algorithms take these two constraints into account when generating a new architecture. However, efficiency metrics such as latency are typically hardware dependent, requiring the NAS algorithm to either measure or predict the latency of each architecture. Measuring the latency of every evaluated architecture adds a significant amount of time to the NAS process. Here we propose Microprocessor A Priori for Latency Estimation (MAPLE), which does not rely on transfer learning or domain adaptation but instead generalizes to new hardware by incorporating prior hardware characteristics during training. MAPLE takes advantage of a novel quantitative strategy to characterize the underlying microprocessor by measuring relevant hardware performance metrics, yielding a fine-grained and expressive hardware descriptor. Moreover, MAPLE exploits the tight coupling between the CPU and GPU and their interdependence: it predicts DNN latency on the GPU while measuring microprocessor performance hardware counters on the CPU that feeds the GPU. Using this quantitative strategy as the hardware descriptor, MAPLE can generalize to new hardware via a few-shot adaptation strategy: with as few as 3 samples it exhibits a 3% improvement over state-of-the-art methods that require as many as 10 samples. Experimental results show that increasing the number of few-shot adaptation samples to 10 improves accuracy over the state-of-the-art methods by 12%. Furthermore, MAPLE exhibits 8-10% better accuracy, on average, than relevant baselines at any number of adaptation samples. The proposed method provides a versatile and practical latency prediction methodology, inferring DNN run-time on multiple hardware devices without imposing any significant overhead for sample collection.
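To make the pipeline concrete, here is a minimal sketch of the idea, not the authors' implementation: a latency regressor is trained on (architecture encoding, hardware descriptor) pairs and then refit with a handful of samples from an unseen device, standing in for MAPLE's few-shot adaptation. The names hardware_descriptor and architecture_features are hypothetical placeholders for measured CPU performance counters and an architecture encoding, the regressor choice is an assumption, and synthetic data replaces measured GPU latencies.

```python
# Sketch only: assumes a fixed-length CPU performance-counter descriptor
# (e.g., readings from `perf stat`) and a fixed-length architecture encoding.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def hardware_descriptor():
    """Placeholder for measured CPU counters (instructions, cache misses, ...)."""
    return rng.normal(size=10)

def architecture_features():
    """Placeholder encoding of a candidate DNN (depth, width, FLOPs, ...)."""
    return rng.normal(size=16)

# Training set: architecture + hardware descriptor, labelled with a
# synthetic "measured GPU latency" for the sake of a runnable example.
X_train = np.array([np.concatenate([architecture_features(),
                                    hardware_descriptor()])
                    for _ in range(500)])
y_train = np.abs(X_train @ rng.normal(size=X_train.shape[1])) + 1.0

model = GradientBoostingRegressor().fit(X_train, y_train)

# Few-shot adaptation to an unseen device: measure a handful (here, 3)
# of samples on the new hardware and refit on the combined data.
X_new = np.array([np.concatenate([architecture_features(),
                                  hardware_descriptor()])
                  for _ in range(3)])
y_new = np.abs(X_new @ rng.normal(size=X_new.shape[1])) + 1.0

model = GradientBoostingRegressor().fit(np.vstack([X_train, X_new]),
                                        np.concatenate([y_train, y_new]))
print(model.predict(X_new[:1]))  # predicted latency for one architecture
```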
Author Information
Saad Abbasi (University of Waterloo)
Alexander Wong (University of Waterloo)
Mohammad Javad Shafiee (University of Waterloo)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 : MAPLE: Microprocessor A Priori for Latency Estimation
More from the Same Authors
- 2021 : Graph Convolutional Networks for Multi-modality Movie Scene Segmentation
  Yaoxin Li · Alexander Wong · Mohammad Javad Shafiee
- 2022 : Faster Attention Is What You Need: A Fast Self-Attention Neural Network Backbone Architecture for the Edge via Double-Condensing Attention Condensers
  Alexander Wong · Mohammad Javad Shafiee · Saad Abbasi · Saeejith Nair · Mahmoud Famouri
- 2022 : COVID-Net Biochem: An Explainability-driven Framework to Building Machine Learning Models for Predicting Survival and Kidney Injury of COVID-19 Patients from Clinical and Biochemistry Data
  Hossein Aboutalebi · Maya Pavlova · Mohammad Javad Shafiee · Adrian Florea · Andrew Hryniowski · Alexander Wong
- 2022 : Detecting COVID-19 infection from ultrasound imaging with only five shots: A high-performing explainable deep few-shot learning network
  Jessy Song · Ashkan Ebadi · Adrian Florea · PENGCHENG XI · Alexander Wong
- 2022 : Breast Cancer Pathologic Complete Response Prediction using Volumetric Deep Radiomic Features from Synthetic Correlated Diffusion Imaging
  Chi-en Tai · Nedim Hodzic · Nic Flanagan · Hayden Gunraj · Alexander Wong
- 2021 : Live Q&A session: MAPLE: Microprocessor A Priori for Latency Estimation
  Saad Abbasi · Alexander Wong · Mohammad Javad Shafiee
- 2018 : Poster presentations
  Simon Wiedemann · Huan Wang · Ivan Zhang · Chong Wang · Mohammad Javad Shafiee · Rachel Manzelli · Wenbing Huang · Tassilo Klein · Lifu Zhang · Ashutosh Adhikari · Faisal Qureshi · Giuseppe Castiglione