TY - JOUR
T1 - Augur
T2 - Modeling the Resource Requirements of ConvNets on Mobile Devices
AU - Lu, Zongqing
AU - Rallapalli, Swati
AU - Chan, Kevin
AU - Pu, Shiliang
AU - La Porta, Thomas
N1 - Funding Information:
This work was supported in part by the Army Research Laboratory under Cooperative Agreement Number W911NF-09-2-0053, NSFC under grant 61872009, and Hikvision. An early version of this work appeared in the Proceedings of ACM Multimedia 2017 [29].
Publisher Copyright:
© 2002-2012 IEEE.
PY - 2021/2/1
Y1 - 2021/2/1
N2 - Convolutional Neural Networks (ConvNets/CNNs) have revolutionized research in computer vision due to their ability to capture complex patterns, resulting in high inference accuracies. However, the increasing complexity of these networks means that they are particularly suited to server computers with powerful GPUs. We envision that deep learning applications will eventually be widely deployed on mobile devices, e.g., smartphones, self-driving cars, and drones. Therefore, in this paper, we aim to understand the resource requirements of CNNs on mobile devices in terms of compute time, memory, and power. First, by deploying several popular CNNs on different mobile CPUs and GPUs, we measure and analyze the performance and resource usage of the CNNs at a layerwise granularity. Our findings point out potential ways of optimizing CNN pipelines on mobile devices. Second, we model the resource requirements of the core computations of CNNs. Finally, based on the measurements and models, we build and evaluate our modeling tool, Augur, which takes a CNN configuration (descriptor) as input and estimates the compute time, memory, and power requirements of the CNN, giving insight into whether and how efficiently a CNN can run on a given mobile platform.
AB - Convolutional Neural Networks (ConvNets/CNNs) have revolutionized research in computer vision due to their ability to capture complex patterns, resulting in high inference accuracies. However, the increasing complexity of these networks means that they are particularly suited to server computers with powerful GPUs. We envision that deep learning applications will eventually be widely deployed on mobile devices, e.g., smartphones, self-driving cars, and drones. Therefore, in this paper, we aim to understand the resource requirements of CNNs on mobile devices in terms of compute time, memory, and power. First, by deploying several popular CNNs on different mobile CPUs and GPUs, we measure and analyze the performance and resource usage of the CNNs at a layerwise granularity. Our findings point out potential ways of optimizing CNN pipelines on mobile devices. Second, we model the resource requirements of the core computations of CNNs. Finally, based on the measurements and models, we build and evaluate our modeling tool, Augur, which takes a CNN configuration (descriptor) as input and estimates the compute time, memory, and power requirements of the CNN, giving insight into whether and how efficiently a CNN can run on a given mobile platform.
UR - http://www.scopus.com/inward/record.url?scp=85099574373&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85099574373&partnerID=8YFLogxK
U2 - 10.1109/TMC.2019.2946538
DO - 10.1109/TMC.2019.2946538
M3 - Article
AN - SCOPUS:85099574373
SN - 1536-1233
VL - 20
SP - 352
EP - 365
JO - IEEE Transactions on Mobile Computing
JF - IEEE Transactions on Mobile Computing
IS - 2
M1 - 8863962
ER -