Profiling Concurrent Vision Inference Workloads on NVIDIA Jetson – Extended
ID Lab, University of Ghent – imec
arXiv:2508.08430 [cs.DC] (11 Aug 2025)
@misc{chakraborty2025profilingconcurrentvisioninference,
  title={Profiling Concurrent Vision Inference Workloads on NVIDIA Jetson – Extended},
  author={Abhinaba Chakraborty and Wouter Tavernier and Akis Kourtis and Mario Pickavet and Andreas Oikonomakis and Didier Colle},
  year={2025},
  eprint={2508.08430},
  archivePrefix={arXiv},
  primaryClass={cs.DC},
  url={https://arxiv.org/abs/2508.08430}
}
The proliferation of IoT devices and advancements in network technologies have intensified the demand for real-time data processing at the network edge. To address these demands, low-power AI accelerators, particularly GPUs, are increasingly deployed for inference tasks, enabling efficient computation while mitigating the latency and bandwidth limitations of cloud-based systems. Despite their growing deployment, GPUs remain underutilised even in computationally intensive workloads. This underutilisation stems from a limited understanding of GPU resource sharing, particularly in edge computing scenarios. In this work, we conduct a detailed analysis of both high- and low-level metrics, including GPU utilisation, memory usage, streaming multiprocessor (SM) utilisation, and tensor core usage, to identify bottlenecks and guide hardware-aware optimisations. By integrating traces from multiple profiling tools, we provide a comprehensive view of resource behaviour on NVIDIA Jetson edge devices under concurrent vision inference workloads. Our findings indicate that while GPU utilisation can reach 100% under specific optimisations, critical low-level resources, such as SMs and tensor cores, often operate at only 15% to 30% utilisation. Moreover, we observe that CPU-side events such as thread scheduling and context switching frequently emerge as bottlenecks, further constraining overall GPU performance. We provide several key observations for users running vision inference workloads on NVIDIA edge devices.
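The high-level metrics the paper discusses (overall GPU load and memory usage) can be sampled on a Jetson with NVIDIA's tegrastats utility while an inference workload runs. The sketch below is a minimal, hedged example: it assumes the GR3D_FREQ field format emitted by recent JetPack releases, and field names or flags may differ on other versions. The low-level SM and tensor core metrics described in the paper require Nsight-class profilers and are not covered here.

```python
#!/usr/bin/env python3
"""Minimal sketch: sample Jetson GPU (GR3D) utilisation by parsing the
output of NVIDIA's tegrastats utility. The exact output format varies
across JetPack releases, so the regular expression below is an
assumption and may need adjusting for a given device."""

import re
import subprocess
import time

# tegrastats prints a line per interval containing e.g. "GR3D_FREQ 42%@921",
# where the percentage is the GPU engine load. Field names can differ
# between JetPack versions.
GR3D_PATTERN = re.compile(r"GR3D_FREQ (\d+)%")


def sample_gpu_utilisation(duration_s: float = 10.0, interval_ms: int = 500):
    """Collect GR3D utilisation samples for `duration_s` seconds."""
    samples = []
    proc = subprocess.Popen(
        ["tegrastats", "--interval", str(interval_ms)],
        stdout=subprocess.PIPE,
        text=True,
    )
    start = time.time()
    try:
        for line in proc.stdout:
            match = GR3D_PATTERN.search(line)
            if match:
                samples.append(int(match.group(1)))
            if time.time() - start >= duration_s:
                break
    finally:
        proc.terminate()
    return samples


if __name__ == "__main__":
    utilisation = sample_gpu_utilisation()
    if utilisation:
        print(f"samples: {len(utilisation)}, "
              f"mean GR3D load: {sum(utilisation) / len(utilisation):.1f}%, "
              f"peak: {max(utilisation)}%")
```

Run while a vision model is serving requests to get a rough high-level utilisation trace; as the paper notes, a near-100% GR3D reading does not imply that SMs or tensor cores are saturated.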
August 24, 2025 by hgpu