
Fine-Grained Resource Sharing for Concurrent GPGPU Kernels

Chris Gregg, Jonathan Dorn, Kim Hazelwood, Kevin Skadron
Department of Computer Science, University of Virginia, PO Box 400740
4th USENIX Workshop on Hot Topics in Parallelism (HotPar’12), 2012

@inproceedings{gregg2012fine,
   title={Fine-Grained Resource Sharing for Concurrent GPGPU Kernels},
   author={Gregg, Chris and Dorn, Jonathan and Hazelwood, Kim and Skadron, Kevin},
   booktitle={4th USENIX Workshop on Hot Topics in Parallelism (HotPar'12)},
   year={2012}
}

General-purpose GPU (GPGPU) programming frameworks such as OpenCL and CUDA allow running individual computation kernels sequentially on a device. However, in some cases it is possible to utilize device resources more efficiently by running kernels concurrently. This raises questions about load balancing and resource allocation that have not previously warranted investigation. For example, what kernel characteristics impact the optimal partitioning of resources among concurrently executing kernels? Current frameworks do not provide the ability to easily run kernels concurrently with fine-grained and dynamic control over resource partitioning. We present KernelMerge, a kernel scheduler that runs two OpenCL kernels concurrently on one device. KernelMerge furnishes a number of settings that can be used to survey concurrent or single-kernel configurations, and to investigate how kernels interact and influence each other, or themselves. KernelMerge provides a concurrent kernel scheduler compatible with the OpenCL API. We present an argument for the benefits of running kernels concurrently. We demonstrate how to use KernelMerge to increase throughput for two kernels that use device resources more efficiently when run concurrently, and we establish that some kernels show worse performance when running concurrently. We also outline a method for using KernelMerge to investigate how concurrent kernels influence each other, with the goal of predicting concurrent runtimes from individual kernel runtimes. Finally, we suggest GPU architectural changes that would improve such concurrent schedulers in the future.
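As context for the problem the abstract describes, the sketch below shows the baseline way to request concurrent kernel execution with the standard OpenCL API: two kernels enqueued on separate command queues that share one context and device. This is not the KernelMerge implementation; the kernel sources, names (kernA, kernB), and problem size here are illustrative placeholders, error handling is omitted, and whether the driver actually overlaps the two dispatches is left entirely to the runtime, which is precisely the coarse, opaque control the paper argues against.

```c
/* Hypothetical sketch: two OpenCL kernels on separate in-order queues
 * sharing one device. The runtime may or may not overlap them. */
#include <CL/cl.h>
#include <stdio.h>

static const char *srcA =
    "__kernel void kernA(__global float *x) { size_t i = get_global_id(0); x[i] *= 2.0f; }";
static const char *srcB =
    "__kernel void kernB(__global float *y) { size_t i = get_global_id(0); y[i] += 1.0f; }";

int main(void) {
    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);

    /* Two queues on the same device: concurrency is up to the driver. */
    cl_command_queue q0 = clCreateCommandQueue(ctx, dev, 0, NULL);
    cl_command_queue q1 = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_program pA = clCreateProgramWithSource(ctx, 1, &srcA, NULL, NULL);
    cl_program pB = clCreateProgramWithSource(ctx, 1, &srcB, NULL, NULL);
    clBuildProgram(pA, 1, &dev, NULL, NULL, NULL);
    clBuildProgram(pB, 1, &dev, NULL, NULL, NULL);
    cl_kernel kA = clCreateKernel(pA, "kernA", NULL);
    cl_kernel kB = clCreateKernel(pB, "kernB", NULL);

    const size_t n = 1 << 20;                      /* placeholder problem size */
    cl_mem bufA = clCreateBuffer(ctx, CL_MEM_READ_WRITE, n * sizeof(float), NULL, NULL);
    cl_mem bufB = clCreateBuffer(ctx, CL_MEM_READ_WRITE, n * sizeof(float), NULL, NULL);
    clSetKernelArg(kA, 0, sizeof(cl_mem), &bufA);
    clSetKernelArg(kB, 0, sizeof(cl_mem), &bufB);

    /* Enqueue both kernels before waiting, so the device sees both at once. */
    clEnqueueNDRangeKernel(q0, kA, 1, NULL, &n, NULL, 0, NULL, NULL);
    clEnqueueNDRangeKernel(q1, kB, 1, NULL, &n, NULL, 0, NULL, NULL);
    clFinish(q0);
    clFinish(q1);

    printf("both kernels completed\n");
    return 0;
}
```

Note that this approach offers no fine-grained or dynamic control over how device resources are split between the two kernels; providing that control, while remaining compatible with the OpenCL API, is the role of the KernelMerge scheduler described in the paper.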
