DySel: Lightweight Dynamic Selection for Kernel-based Data-parallel Programming Model

Li-Wen Chang, Hee-Seok Kim, Wen-mei Hwu
University of Illinois
21st ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS ’16), 2016


@inproceedings{chang2016programming,
   title={A programming system for future proofing performance critical libraries},
   author={Chang, Li-Wen and El Hajj, Izzat and Kim, Hee-Seok and G{\'o}mez-Luna, Juan and Dakkak, Abdul and Hwu, Wen-mei},
   booktitle={Proceedings of the 21st ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming},
   year={2016}
}




The rising pressure to simultaneously improve performance and reduce power is driving more diversity into all aspects of computing devices. An algorithm that is well-matched to the target hardware can run multiple times faster and more energy efficiently than one that is not. The problem is further complicated by the fact that a program’s input also affects the appropriate choice of algorithm. As a result, software developers face the challenge of determining the appropriate algorithm for each potential combination of target device and data. This paper presents DySel, a novel runtime system that automates such determination for kernel-based data-parallel programming models such as OpenCL, CUDA, OpenACC, and C++AMP. These programming models cover many applications that demand high performance in mobile, cloud, and high-performance computing. DySel systematically deploys candidate kernels on a small portion of the actual data to determine which achieves the best performance for the hardware-data combination. This test deployment, referred to as micro-profiling, contributes to the final execution result and incurs less than 8% overhead in the worst observed case when compared to an oracle. We show four major use cases where DySel provides significantly more consistent performance without tedious effort from the developer.
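The core idea of micro-profiling, as described in the abstract, is that each candidate kernel is timed on a distinct chunk of the real input, and those chunk results are kept as part of the final output, so the profiling work is not wasted. A minimal sketch of this selection loop in Python (function and parameter names are hypothetical, not DySel's actual API):

```python
import time

def microprofile_select(candidates, data, chunk_size=1024):
    """Pick the fastest candidate by timing each one on a chunk of the
    real input. Chunk results contribute to the final output, so the
    profiling cost is amortized into useful work.

    Hypothetical sketch: `candidates` is a list of interchangeable
    "kernels" (functions that map an input slice to an output slice).
    """
    results = []
    timings = []
    pos = 0
    # Micro-profiling phase: each candidate processes one real chunk.
    for kernel in candidates:
        chunk = data[pos:pos + chunk_size]
        if not chunk:
            break
        start = time.perf_counter()
        results.extend(kernel(chunk))
        timings.append((time.perf_counter() - start, kernel))
        pos += chunk_size
    # Select the fastest candidate for this hardware-data combination.
    best = min(timings, key=lambda t: t[0])[1]
    # Production phase: the winner processes the remaining data.
    results.extend(best(data[pos:]))
    return results, best
```

Because every candidate must produce the same output for the same input slice, the concatenated chunk results plus the winner's remainder form a correct final result, which is why the overhead relative to an oracle stays small.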


HGPU group © 2010-2021 hgpu.org

All rights belong to the respective authors
