CUDA-Zero: a framework for porting shared memory GPU applications to multi-GPUs

Chen DeHao, Chen WenGuang, Zheng WeiMin
Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
Science China Information Sciences, Volume 55, Number 3, 663-676, 2012

@article{chen2012cuda,
   title={CUDA-Zero: a framework for porting shared memory GPU applications to multi-GPUs},
   author={Chen, D.H. and Chen, W.G. and Zheng, W.M.},
   journal={Science China Information Sciences},
   volume={55},
   number={3},
   pages={663--676},
   year={2012},
   publisher={Springer}
}

As general-purpose computation on GPUs has become prevalent, shared memory programming models have been proposed to ease the pain of GPU programming. However, as workloads grow more demanding, it is desirable to port GPU programs to more scalable distributed memory environments, such as multi-GPUs. To achieve this, programs need to be rewritten with mixed programming models (e.g. CUDA and message passing). Programmers must work carefully not only on workload distribution, but also on scheduling mechanisms to ensure efficient execution. In this paper, we study the possibility of automating this parallelization to multi-GPUs. Starting from a GPU program written in a shared memory model, our framework analyzes the access patterns of arrays in kernel functions to derive data partition schemes. To acquire the access patterns, we propose a three-tier approach: static analysis, profile-based analysis, and user annotation. Experiments show that most access patterns can be derived correctly by the first two tiers, meaning that zero effort is needed to port an existing application to a distributed memory environment. We use our framework to parallelize several applications and show that, for certain kinds of applications, CUDA-Zero achieves efficient parallelization in a multi-GPU environment.
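As a rough illustration of the kind of access pattern the first (static-analysis) tier can recognize, the sketch below shows a kernel whose arrays are indexed affinely by the thread ID, so each output element depends only on the input element at the same index. The host loop then performs, by hand and sequentially for clarity, the contiguous index-range partition across GPUs that such a framework would derive automatically. Both the kernel and the host code are hypothetical examples written for this summary, not code from the paper.

```cuda
// Hypothetical example: an affine, per-element access pattern
// (out[i] depends only on in[i]) that a static-analysis tier can
// recognize and partition by index range across GPUs.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *out, const float *in, float alpha, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = alpha * in[i];   // contiguous, non-overlapping accesses
}

int main() {
    const int n = 1 << 20;
    float *h_in = new float[n], *h_out = new float[n];
    for (int i = 0; i < n; ++i) h_in[i] = float(i);

    int ngpu = 1;
    cudaGetDeviceCount(&ngpu);

    // Manual version of the partition a tool like CUDA-Zero derives
    // automatically: each GPU owns a contiguous slice [lo, hi) of the
    // index space and only its slice of the arrays is transferred.
    for (int g = 0; g < ngpu; ++g) {
        int lo = (int)((long long)n * g / ngpu);
        int hi = (int)((long long)n * (g + 1) / ngpu);
        int len = hi - lo;

        cudaSetDevice(g);
        float *d_in, *d_out;
        cudaMalloc(&d_in,  len * sizeof(float));
        cudaMalloc(&d_out, len * sizeof(float));
        cudaMemcpy(d_in, h_in + lo, len * sizeof(float), cudaMemcpyHostToDevice);

        int threads = 256, blocks = (len + threads - 1) / threads;
        scale<<<blocks, threads>>>(d_out, d_in, 2.0f, len);

        cudaMemcpy(h_out + lo, d_out, len * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(d_in);
        cudaFree(d_out);
    }

    printf("h_out[12345] = %f\n", h_out[12345]);
    delete[] h_in;
    delete[] h_out;
    return 0;
}
```

A real multi-GPU port would overlap the per-GPU work with streams and asynchronous copies; the sequential loop here only makes the index-range partition explicit.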
