Efficient fine grained shared buffer management for multiple OpenCL devices

Chang-qing Xun, Dong Chen, Qiang Lan, Chun-yuan Zhang
Computer School, National University of Defense Technology, Changsha 410073, China
Journal of Zhejiang University-SCIENCE C, 14(11), 2013

@article{xun2013efficient,
   title={Efficient fine grained shared buffer management for multiple OpenCL devices},
   author={Xun, Chang-qing and Chen, Dong and Lan, Qiang and Zhang, Chun-yuan},
   journal={Journal of Zhejiang University-SCIENCE C},
   volume={14},
   number={11},
   year={2013}
}

OpenCL programming provides full code portability between different hardware platforms, and can serve as a good programming candidate for heterogeneous systems, which typically consist of a host processor and several accelerators. However, to make full use of the computing capacity of such a system, programmers are required to manage the diverse OpenCL-enabled devices explicitly, including distributing the workload between devices and managing data transfers among them. These tedious jobs pose a huge challenge for programmers. In this paper, a Distributed Shared OpenCL Memory (DSOM) is presented, which relieves users of having to manage data transfer explicitly by supporting shared buffers across devices. DSOM allocates shared buffers in system memory and treats on-device memory as a software-managed virtual cache buffer. To support fine-grained shared buffer management, we designed a kernel parser in DSOM for buffer access range analysis. A basic modified/shared/invalid (MSI) cache coherency protocol is implemented in DSOM to maintain coherency for the cache buffers. In addition, we propose a novel strategy to minimize communication cost between devices by launching each necessary data transfer as early as possible, which allows data transfer to overlap with kernel execution. Our experimental results show that our method for buffer access range analysis is widely applicable and that DSOM is efficient.
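
The abstract's key ideas, a host-resident shared buffer, on-device memory used as a software-managed cache, and data transfers issued as early as possible so they overlap with kernel execution, can be illustrated with standard OpenCL host calls. The sketch below is a minimal illustration under those assumptions and is not the paper's DSOM API: the host array stands in for the shared buffer, per-device cl_mem objects act as cache buffers, and non-blocking writes let the copy for the second device overlap with the first device's kernel.

/* Hypothetical sketch, not the DSOM implementation: a host array serves as the
 * shared buffer, per-device cl_mem objects act as software-managed caches, and
 * the transfer for device 1 is issued while device 0's kernel may still be
 * running, so communication overlaps with computation. Error handling omitted. */
#include <CL/cl.h>
#include <stdio.h>
#include <stdlib.h>

static const char *src =
    "__kernel void scale(__global float *buf) {\n"
    "    size_t i = get_global_id(0);\n"
    "    buf[i] *= 2.0f;\n"
    "}\n";

int main(void) {
    enum { N = 1 << 20 };
    size_t half = N / 2, bytes = half * sizeof(float);
    float *shared = malloc(N * sizeof(float));   /* host-resident "shared buffer" */
    for (size_t i = 0; i < N; ++i) shared[i] = 1.0f;

    cl_platform_id plat;  cl_device_id dev[2];  cl_uint ndev = 0;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_ALL, 2, dev, &ndev);
    if (ndev > 2) ndev = 2;
    if (ndev < 2) dev[1] = dev[0];               /* fall back to a single device */

    cl_context ctx = clCreateContext(NULL, ndev, dev, NULL, NULL, NULL);
    cl_command_queue q[2] = {
        clCreateCommandQueue(ctx, dev[0], 0, NULL),
        clCreateCommandQueue(ctx, dev[1], 0, NULL)
    };
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, ndev, dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", NULL);

    /* Per-device "cache" buffers, each holding one half of the shared buffer. */
    cl_mem cache[2] = {
        clCreateBuffer(ctx, CL_MEM_READ_WRITE, bytes, NULL, NULL),
        clCreateBuffer(ctx, CL_MEM_READ_WRITE, bytes, NULL, NULL)
    };

    for (int d = 0; d < 2; ++d) {
        /* Non-blocking write: the copy for device 1 is enqueued while device 0's
         * kernel is (potentially) still executing on its own queue. */
        clEnqueueWriteBuffer(q[d], cache[d], CL_FALSE, 0, bytes,
                             shared + d * half, 0, NULL, NULL);
        clSetKernelArg(k, 0, sizeof(cl_mem), &cache[d]);
        clEnqueueNDRangeKernel(q[d], k, 1, NULL, &half, NULL, 0, NULL, NULL);
        /* Write the device's partition back to the shared buffer. */
        clEnqueueReadBuffer(q[d], cache[d], CL_FALSE, 0, bytes,
                            shared + d * half, 0, NULL, NULL);
    }
    clFinish(q[0]);
    clFinish(q[1]);
    printf("shared[0] = %f, shared[N-1] = %f\n", shared[0], shared[N - 1]);
    return 0;
}

In this sketch the coherency problem is avoided by giving each device a disjoint half of the buffer; DSOM's contribution, as described above, is to keep overlapping cached ranges coherent automatically via its kernel parser and MSI-style protocol.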
