Document Type
Conference Proceeding
Publication Date
2-2025
Publication Title
Proceedings of the Winter Conference on Applications of Computer Vision (WACV) 2025
Pages
7153-7162
Publisher Name
IEEE
Abstract
This paper investigates how to efficiently deploy vision transformers on edge devices for small workloads. Recent methods reduce the latency of transformer neural networks by removing or merging tokens, with small accuracy degradation. However, these methods are not designed with edge device deployment in mind: they do not leverage information about latency-workload trends to improve efficiency. We address this shortcoming in our work. First, we identify factors that affect ViT latency-workload relationships. Second, we determine a token pruning schedule by leveraging non-linear latency-workload relationships. Third, we demonstrate a training-free token pruning method utilizing this schedule. We show that other methods may increase latency by 2-30%, while we reduce latency by 9-26%. For similar latency (within 5.2% or 7 ms) across devices, we achieve 78.6%-84.5% ImageNet-1K classification accuracy, while the state-of-the-art Token Merging achieves 45.8%-85.4%.
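The schedule-selection idea in the abstract can be illustrated with a short, hypothetical sketch. The Python snippet below (assuming PyTorch and timm are available; the model name, the jump-detection heuristic, and all function names are illustrative and not the authors' released method) profiles a ViT backbone at varying token counts and keeps the largest token budget that sits just below a non-linear latency jump:

```python
# Hypothetical sketch (not the paper's actual code): profile a ViT's
# transformer blocks as a function of token count, detect non-linear
# "jumps" in the latency-workload curve, and pick a token budget that
# sits just below such a jump.
import time
import torch
import timm  # assumes timm is installed; model choice is illustrative

model = timm.create_model("vit_base_patch16_224", pretrained=False).eval()

def measure_latency(blocks, num_tokens, embed_dim=768, trials=30):
    """Median forward-pass latency (CPU timing) for a given token count."""
    x = torch.randn(1, num_tokens, embed_dim)
    with torch.no_grad():
        for _ in range(5):                     # warm-up iterations
            blocks(x)
        times = []
        for _ in range(trials):
            t0 = time.perf_counter()
            blocks(x)
            times.append(time.perf_counter() - t0)
    return sorted(times)[len(times) // 2]

# Sweep token counts up to the full 196 (= 14x14 patches for a 224px input).
candidates = list(range(64, 197, 4))
latency = {n: measure_latency(model.blocks, n) for n in candidates}

# Keep the largest token count just below a latency "step": a point where
# adding a few tokens costs disproportionately more than linear scaling.
best = candidates[0]
for prev, cur in zip(candidates, candidates[1:]):
    jump = latency[cur] - latency[prev]
    per_token = latency[prev] / prev           # average per-token cost so far
    if jump > 3 * per_token * (cur - prev):    # heuristic jump threshold
        best = prev
        break
    best = cur

print(f"Keep {best} tokens (~{latency[best] * 1e3:.1f} ms per forward pass)")
```

On hardware with stepwise latency behavior, this sweep would show that pruning down to the nearest step boundary saves latency, while pruning slightly fewer tokens may save nothing at all, which is the non-linearity the paper exploits.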
Recommended Citation
Nicholas John Eliopoulos, Purvish Jajal, James C. Davis, Gaowen Liu, George K. Thiruvathukal, Yung-Hsiang Lu; Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 7153-7162
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Copyright Statement
© The Author(s), 2024.
Comments
Author Posting © The Author(s), 2024. This WACV paper is the Open Access version, provided by the Computer Vision Foundation. Except for the watermark, it is identical to the accepted version of the article, which will be published in IEEE Xplore.