FELARE: Fair Scheduling of Machine Learning Applications on Heterogeneous Edge Systems

Ali Mokhtari, Pooyan Jamshidi, Mohsen Amini Salehi
School of Computing and Informatics, University of Louisiana at Lafayette, USA
arXiv:2206.00065 [cs.DC], 31 May 2022




@misc{mokhtari2022felare,
   author={Mokhtari, Ali and Jamshidi, Pooyan and Salehi, Mohsen Amini},
   title={FELARE: Fair Scheduling of Machine Learning Applications on Heterogeneous Edge Systems},
   year={2022},
   eprint={2206.00065},
   archivePrefix={arXiv},
   primaryClass={cs.DC},
   keywords={Distributed, Parallel, and Cluster Computing (cs.DC), Machine Learning (cs.LG), Performance (cs.PF), FOS: Computer and information sciences},
   copyright={Creative Commons Zero v1.0 Universal}
}


Edge computing enables smart IoT-based systems via concurrent and continuous execution of latency-sensitive machine learning (ML) applications. These edge-based ML systems are often battery-powered (i.e., energy-limited) and use heterogeneous resources with diverse computing performance (e.g., CPUs, GPUs, and FPGAs) to meet the latency constraints of ML applications. The challenge is to allocate user requests for different ML applications on Heterogeneous Edge Computing Systems (HEC) while respecting both the energy and latency constraints of these systems. To this end, we study and analyze resource allocation solutions that can increase the on-time task completion rate while honoring the energy constraint. Importantly, we investigate edge-friendly (lightweight) multi-objective mapping heuristics that do not become biased toward a particular application type; instead, the heuristics consider "fairness" across the concurrent ML applications in their mapping decisions. Performance evaluations demonstrate that the proposed heuristic outperforms widely used heuristics for heterogeneous systems on both the latency and energy objectives, particularly at low to moderate request arrival rates. We observed an 8.9% improvement in on-time task completion rate and a 12.6% improvement in energy saving, without imposing any significant overhead on the edge system.
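The abstract does not spell out the mapping heuristic itself. As a rough illustration of the kind of lightweight, fairness-aware multi-objective mapping the abstract describes, the sketch below greedily assigns tasks to heterogeneous machines by a weighted score of completion time and energy, with a penalty that discourages favoring one application type. All task and machine parameters (speed, power, deadlines, the `alpha` weight) are invented for illustration; the actual FELARE heuristic may differ substantially.

```python
# Hypothetical sketch of a fairness-aware, multi-objective mapping
# heuristic for a heterogeneous edge system. Parameters are invented
# for illustration; this is NOT the paper's actual FELARE algorithm.
from dataclasses import dataclass

@dataclass
class Machine:
    name: str
    speed: float          # relative compute speed (work units / time)
    power: float          # energy cost per work unit
    free_at: float = 0.0  # time at which the machine becomes available

@dataclass
class Task:
    app: str              # ML application type (e.g., "speech", "vision")
    work: float           # abstract work units
    deadline: float       # absolute completion deadline

def map_tasks(tasks, machines, alpha=0.5):
    """Greedily map each task (earliest deadline first) to the machine
    minimizing a weighted sum of completion time and energy, skipping
    machines that would miss the deadline. A per-application penalty
    (count of tasks already served) nudges the mapping toward fairness
    across concurrent ML applications."""
    completed = {}   # app type -> number of tasks mapped so far
    schedule = []    # (app, machine name, finish time) tuples
    for task in sorted(tasks, key=lambda t: t.deadline):
        best = None
        for m in machines:
            finish = m.free_at + task.work / m.speed
            if finish > task.deadline:
                continue  # this machine cannot meet the deadline; drop it
            energy = task.work * m.power
            penalty = completed.get(task.app, 0)  # fairness term
            score = alpha * finish + (1 - alpha) * energy + penalty
            if best is None or score < best[0]:
                best = (score, m, finish)
        if best is not None:
            _, m, finish = best
            m.free_at = finish
            completed[task.app] = completed.get(task.app, 0) + 1
            schedule.append((task.app, m.name, finish))
    return schedule
```

In this toy setup, a fast GPU rescues a tight-deadline task that the CPU cannot finish in time, while the looser task falls to the cheaper CPU; the fairness penalty only matters once one application type starts accumulating completions.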

* * *


HGPU group © 2010-2022 hgpu.org

All rights belong to the respective authors
