Efficient Computer Vision on Edge Devices with Pipeline-Parallel Hierarchical Neural Networks
Document Type
Conference Proceeding
Publication Date
January 2022
Publication Title
27th Asia and South Pacific Design Automation Conference (ASP-DAC 2022)
Abstract
Computer vision on low-power edge devices enables applications such as search-and-rescue and security. However, state-of-the-art computer vision algorithms, such as Deep Neural Networks (DNNs), are too large for inference on low-power edge devices. To improve efficiency, some existing approaches parallelize DNN inference across multiple edge devices, but these techniques introduce significant communication and synchronization overheads or fail to balance workloads across devices. This paper demonstrates that the hierarchical DNN architecture is well suited to parallel processing on multiple edge devices. We design a novel method that creates a parallel inference pipeline for computer vision problems that use hierarchical DNNs. The method balances the load across the collaborating devices and reduces communication costs, so that multiple video frames can be processed simultaneously at higher throughput. Our experiments consider a representative computer vision problem, image recognition performed on each video frame, running on multiple Raspberry Pi 4B devices. With four collaborating low-power edge devices, our approach achieves 3.21X higher throughput, 68% less energy consumption per device per frame, and a 58% reduction in memory use compared with existing single-device hierarchical DNNs.
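As a rough illustration of the approach summarized above (not the authors' implementation), the following minimal Python sketch shows the pipeline-parallel idea: each collaborating device hosts one level of a hierarchical DNN and forwards intermediate results downstream, so several video frames are in flight at once. The stage functions and in-process queues are hypothetical stand-ins for the real DNN levels and the network links between devices.

import threading
import queue

def make_stage(name, infer, inbox, outbox):
    # One pipeline stage: pull a frame, run this level's inference, push on.
    def loop():
        while True:
            item = inbox.get()
            if item is None:              # shutdown sentinel
                outbox.put(None)
                return
            frame_id, data = item
            outbox.put((frame_id, infer(data)))
    return threading.Thread(target=loop, name=name, daemon=True)

# Hypothetical per-level inference functions standing in for real models.
root_level  = lambda x: x + ["root:coarse-category"]
child_level = lambda x: x + ["child:fine-category"]
leaf_level  = lambda x: x + ["leaf:label"]

q0, q1, q2, q3 = (queue.Queue(maxsize=4) for _ in range(4))
stages = [
    make_stage("device-1", root_level,  q0, q1),
    make_stage("device-2", child_level, q1, q2),
    make_stage("device-3", leaf_level,  q2, q3),
]
for s in stages:
    s.start()

for frame_id in range(8):                 # feed video frames into the pipeline
    q0.put((frame_id, [f"frame-{frame_id}"]))
q0.put(None)                              # end-of-stream marker

while (result := q3.get()) is not None:
    print(result)

Because the stages run concurrently, frame k+1 can enter the root level while frame k is still at a child level, which is the source of the throughput gain the abstract reports; the paper's load balancing corresponds to choosing the device-to-level assignment so that no single stage becomes the bottleneck.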
Identifier
https://arxiv.org/abs/2109.13356
Recommended Citation
Abhinav Goel, Caleb Tung, Xiao Hu, George K. Thiruvathukal, James C. Davis, and Yung-Hsiang Lu, "Efficient Computer Vision on Edge Devices with Pipeline-Parallel Hierarchical Neural Networks", in Proceedings of the 27th Asia and South Pacific Design Automation Conference (ASP-DAC 2022).
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 License.