
Kernelet: High-Throughput GPU Kernel Executions with Dynamic Slicing and Scheduling

Jianlong Zhong, Bingsheng He
School of Computer Engineering, Nanyang Technological University, Singapore, 639798
arXiv:1303.5164 [cs.DC], (21 Mar 2013)

@article{2013arXiv1303.5164Z,
   author = {{Zhong}, J. and {He}, B.},
   title = "{Kernelet: High-Throughput GPU Kernel Executions with Dynamic Slicing and Scheduling}",
   journal = {ArXiv e-prints},
   archivePrefix = "arXiv",
   eprint = {1303.5164},
   primaryClass = "cs.DC",
   keywords = {Computer Science - Distributed, Parallel, and Cluster Computing, I.3.1, D.1.3, C.4},
   year = 2013,
   month = mar,
   adsurl = {http://adsabs.harvard.edu/abs/2013arXiv1303.5164Z},
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}


Graphics processors, or GPUs, have recently been widely used as accelerators in shared environments such as clusters and clouds. In such environments, many kernels are submitted to GPUs by different users, and throughput is an important metric for both performance and total cost of ownership. Despite recently improved runtime support for concurrent GPU kernel executions, the GPU can be severely underutilized, resulting in suboptimal throughput. In this paper, we propose Kernelet, a runtime system with dynamic slicing and scheduling techniques to improve the throughput of concurrent kernel executions on the GPU. With slicing, Kernelet divides a GPU kernel into multiple sub-kernels (namely slices). Each slice has tunable occupancy to allow co-scheduling with other slices and to fully utilize the GPU resources. We develop a novel and effective Markov chain based performance model to guide the scheduling decision. Our experimental results demonstrate up to 31.1% and 23.4% performance improvement on NVIDIA Tesla C2050 and GTX680 GPUs, respectively.
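As a rough illustration of the slicing idea described in the abstract (not the authors' implementation), the sketch below splits each kernel's grid of thread blocks into fixed-size slices, identified by block-offset ranges, and interleaves slices from two kernels round-robin so neither kernel monopolizes the GPU. The function names, the fixed slice size, and the round-robin policy are all assumptions; Kernelet tunes slice occupancy and uses its Markov chain model to pick co-schedules.

```python
from collections import deque

def make_slices(kernel_name, total_blocks, slice_blocks):
    """Split a kernel's grid of `total_blocks` thread blocks into
    slices of at most `slice_blocks` blocks, as (name, start, end) ranges.
    On a real GPU, each slice would be launched as a sub-kernel whose
    blocks compute their logical block index as start + blockIdx.x."""
    slices = []
    for start in range(0, total_blocks, slice_blocks):
        end = min(start + slice_blocks, total_blocks)
        slices.append((kernel_name, start, end))
    return slices

def co_schedule(slices_a, slices_b):
    """Interleave slices of two kernels round-robin (a stand-in for
    Kernelet's model-guided co-scheduling decision)."""
    qa, qb = deque(slices_a), deque(slices_b)
    order = []
    while qa or qb:
        if qa:
            order.append(qa.popleft())
        if qb:
            order.append(qb.popleft())
    return order

# Hypothetical kernels: 8-block "matmul" sliced by 3, 4-block "reduce" by 2.
a = make_slices("matmul", total_blocks=8, slice_blocks=3)
b = make_slices("reduce", total_blocks=4, slice_blocks=2)
schedule = co_schedule(a, b)
```

Running the sketch yields an alternating launch order (`matmul` slice, `reduce` slice, ...), so slices of the two kernels can be co-resident and fill resources one kernel alone would leave idle.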


HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors