Auto-tuned OpenCL kernel co-execution in OmpSs for heterogeneous systems

B. Pérez, E. Stafford, J. L. Bosque, R. Beivide, S. Mateo, X. Teruel, X. Martorell, E. Ayguadé
Department of Computer Science and Electronics. Universidad de Cantabria. Santander, Spain
Journal of parallel and distributed computing, 2019

@article{perez2019auto,

   title={Auto-tuned OpenCL kernel co-execution in OmpSs for heterogeneous systems},

   author={P{\'e}rez, Borja and Stafford, Esteban and Bosque, JL and Beivide, R and Mateo, S and Teruel, X and Martorell, X and Ayguad{\'e}, E},

   journal={Journal of Parallel and Distributed Computing},

   volume={125},

   pages={45--57},

   year={2019},

   publisher={Elsevier}

}

Heterogeneous systems have become increasingly prominent: the nodes of the most powerful computers now integrate several compute accelerators, such as GPUs. Profiting from such node configurations is not a trivial endeavour. OmpSs is a framework for task-based parallel applications that allows the execution of OpenCL kernels on different compute devices. However, it does not support the co-execution of a single kernel on several devices. This paper presents an extension of OmpSs that rises to this challenge, and presents Auto-Tune, a load balancing algorithm that automatically adjusts its internal parameters to suit the hardware capabilities and application behavior. The extension allows programmers to take full advantage of all the computing devices in a node with negligible impact on the code. It takes care of two main issues: first, the automatic distribution of datasets and the management of device memory address spaces; second, the implementation of a set of load balancing algorithms that adapt to the particularities of applications and systems. Experimental results reveal that co-executing a single kernel on all the devices of a node is beneficial in terms of both performance and energy consumption, and that Auto-Tune gives the best overall results.
