KeSCo: Compiler-based Kernel Scheduling for Multi-task GPU Applications

Zejia Lin, Zewei Mo, Xuanteng Huang, Xianwei Zhang, Yutong Lu
Sun Yat-sen University, Guangzhou, China
The 41st IEEE International Conference on Computer Design (ICCD’23), 2023

@inproceedings{lin2023kesco,
  title={KeSCo: Compiler-based Kernel Scheduling for Multi-task GPU Applications},
  author={Lin, Zejia and Mo, Zewei and Huang, Xuanteng and Zhang, Xianwei and Lu, Yutong},
  booktitle={2023 IEEE 41st International Conference on Computer Design (ICCD)},
  year={2023}
}

Nowadays, Graphics Processing Units (GPUs) dominate a wide spectrum of computing domains, and multi-tasking is increasingly common in complex applications. To gain higher performance, multi-task programs require cumbersome programming effort to exploit inter-kernel concurrency at the source-code level. Although prior works automatically schedule kernels to enable inter-kernel concurrency, they all introduce new programming frameworks, and some even suffer significant performance degradation compared to expert hand-tuned optimizations. To address this issue, we propose KeSCo, a compiler-based scheduler that exposes kernel-level concurrency in multi-task programs with only trivial code modifications. At compile time, KeSCo schedules kernels into task queues, accounting for both load balance and synchronization cost. KeSCo also uses an algorithm customized for the computational flow to remove redundant synchronizations. The design is further extended to support the multi-process scenario, where multiple GPU processes share a single context. Evaluations on representative benchmarks show that the proposed approach achieves an average speedup of 1.28x in the multi-task scenario (1.22x in the multi-process scenario). Even with less programming effort, our design outperforms two state-of-the-art approaches, GrSched and Taskflow, by 1.31x and 1.16x on average, respectively.
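The abstract summarizes the mechanism at a high level: kernels are placed into task queues (CUDA streams) and redundant synchronizations between them are removed. Below is a minimal, hypothetical sketch of the hand-written multi-stream code that such a scheduler aims to produce automatically; the kernels, stream assignment, and event-based dependency are illustrative assumptions, not KeSCo's actual transformation.

```cuda
// Hypothetical sketch (not KeSCo's output): the stream-level scheduling a
// programmer would otherwise write by hand. Two independent kernels run on
// separate streams (task queues), and a single event expresses the only real
// dependency, so one final synchronization suffices.
#include <cuda_runtime.h>

__global__ void scaleKernel(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

__global__ void addKernel(const float *a, const float *b, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *out;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));
    cudaMalloc(&out, n * sizeof(float));
    cudaMemset(a, 0, n * sizeof(float));
    cudaMemset(b, 0, n * sizeof(float));

    cudaStream_t s0, s1;
    cudaStreamCreate(&s0);
    cudaStreamCreate(&s1);
    cudaEvent_t aReady;
    cudaEventCreate(&aReady);

    dim3 block(256), grid((n + block.x - 1) / block.x);

    // Independent work on 'a' and 'b' runs concurrently on two queues.
    scaleKernel<<<grid, block, 0, s0>>>(a, n, 2.0f);
    cudaEventRecord(aReady, s0);          // marks completion of s0's kernel
    scaleKernel<<<grid, block, 0, s1>>>(b, n, 3.0f);

    // addKernel needs both inputs: make s1 wait on s0's event instead of
    // inserting a device-wide cudaDeviceSynchronize between launches.
    cudaStreamWaitEvent(s1, aReady, 0);
    addKernel<<<grid, block, 0, s1>>>(a, b, out, n);

    cudaStreamSynchronize(s1);            // single synchronization at the end

    cudaFree(a); cudaFree(b); cudaFree(out);
    cudaStreamDestroy(s0); cudaStreamDestroy(s1);
    cudaEventDestroy(aReady);
    return 0;
}
```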
