SnuCL: an OpenCL framework for heterogeneous CPU/GPU clusters

Jungwon Kim, Sangmin Seo, Jun Lee, Jeongho Nah, Gangwon Jo, Jaejin Lee
Center for Manycore Programming, School of Computer Science and Engineering, Seoul National University, Seoul 151-744, Korea
26th ACM international conference on Supercomputing (ICS ’12), 2012


@inproceedings{Kim2012SnuCL,
   title={SnuCL: an OpenCL framework for heterogeneous CPU/GPU clusters},
   author={Kim, Jungwon and Seo, Sangmin and Lee, Jun and Nah, Jeongho and Jo, Gangwon and Lee, Jaejin},
   booktitle={Proceedings of the 26th ACM International Conference on Supercomputing (ICS '12)},
   year={2012},
   publisher={ACM}
}









In this paper, we propose SnuCL, an OpenCL framework for heterogeneous CPU/GPU clusters. We show that the original OpenCL semantics fits naturally into the heterogeneous cluster programming environment, and that the framework achieves both high performance and ease of programming. The target cluster architecture consists of a single, designated host node and many compute nodes, connected by an interconnection network such as Gigabit Ethernet or InfiniBand switches. Each compute node is equipped with multicore CPUs and multiple GPUs; a set of CPU cores or each individual GPU becomes an OpenCL compute device. The host node executes the host program of an OpenCL application. SnuCL presents the user with a single system image of the heterogeneous CPU/GPU cluster, allowing the application to use compute devices in a compute node as if they were in the host node. No communication API, such as the MPI library, is required in the application source. SnuCL also provides collective communication extensions to OpenCL to facilitate manipulating memory objects. With SnuCL, an OpenCL application becomes portable not only between heterogeneous devices within a single node, but also between compute devices across the cluster environment. We implement SnuCL and evaluate its performance using eleven OpenCL benchmark applications.

