Exploiting Hyper-Loop Parallelism in Vectorization to Improve Memory Performance on CUDA GPGPU

Shixiong Xu, David Gregg
Software Tools Group, Department of Computer Science, Trinity College, The University of Dublin, Ireland
2015 IEEE International Symposium on Parallel and Distributed Processing with Applications (ISPA), 2015

@inproceedings{xu2015exploiting,
   title={Exploiting Hyper-Loop Parallelism in Vectorization to Improve Memory Performance on CUDA GPGPU},
   author={Xu, Shixiong and Gregg, David},
   booktitle={2015 IEEE International Symposium on Parallel and Distributed Processing with Applications (ISPA)},
   year={2015}
}

Memory performance is of great importance for achieving high performance on NVIDIA CUDA GPUs. Previous work has proposed specific optimizations such as thread coarsening, caching data in shared memory, and global data layout transformation. We argue that vectorization based on hyper-loop parallelism can serve as a unified technique for optimizing memory performance. In this paper, we put forward a compiler framework based on the Cetus source-to-source compiler that improves memory performance on CUDA GPUs by efficiently exploiting hyper-loop parallelism in vectorization. We introduce abstractions of SIMD vectors and SIMD operations that match the execution model and memory model of the CUDA GPU, along with three execution mapping strategies for efficiently offloading vectorized code to CUDA GPUs. In addition, because we employ vectorization in C-to-CUDA compilation with automatic parallelization, our technique further refines the mapping granularity between coarse-grain loop parallelism and GPU threads. We evaluated the proposed technique on two platforms, an embedded GPU system (Jetson TK1) and a desktop GPU (GeForce GTX 645). The experimental results demonstrate that our vectorization technique based on hyper-loop parallelism yields speedups of up to 2.5x over direct coarse-grain loop parallelism mapping.
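
To make the contrast concrete, below is a minimal CUDA sketch of the two mappings the abstract compares: direct coarse-grain mapping, where one thread executes a whole outer-loop iteration and a warp's threads access memory with a large stride, versus a lane-per-thread mapping in the spirit of hyper-loop parallelism, where adjacent threads access adjacent addresses. The kernel names, array sizes, and the element-wise computation are illustrative assumptions, not the paper's generated code.

    // Illustrative sketch only (hypothetical kernels; not the paper's output):
    // contrasting coarse-grain loop mapping with a hyper-loop-style mapping.
    #include <cuda_runtime.h>

    #define N 1024  // outer-loop trip count (rows)
    #define M 256   // inner-loop trip count (columns)

    // Direct coarse-grain mapping: one thread per outer iteration. Each
    // thread walks an entire row, so adjacent threads in a warp access
    // addresses M floats apart -- poorly coalesced.
    __global__ void rowPerThread(const float *a, float *b)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < N)
            for (int j = 0; j < M; ++j)
                b[i * M + j] = a[i * M + j] * 2.0f;
    }

    // Lane-per-thread mapping in the spirit of hyper-loop parallelism: each
    // thread handles one "SIMD lane" (one inner iteration), so adjacent
    // threads access adjacent addresses -- fully coalesced.
    __global__ void lanePerThread(const float *a, float *b)
    {
        int idx = blockIdx.x * blockDim.x + threadIdx.x;
        if (idx < N * M) {
            int i = idx / M;  // outer-loop (row) index
            int j = idx % M;  // inner-loop (lane) index
            b[i * M + j] = a[i * M + j] * 2.0f;
        }
    }

    int main()
    {
        float *a, *b;
        cudaMalloc(&a, N * M * sizeof(float));
        cudaMalloc(&b, N * M * sizeof(float));
        cudaMemset(a, 0, N * M * sizeof(float));

        rowPerThread<<<(N + 255) / 256, 256>>>(a, b);
        lanePerThread<<<(N * M + 255) / 256, 256>>>(a, b);
        cudaDeviceSynchronize();

        cudaFree(a);
        cudaFree(b);
        return 0;
    }

On most CUDA devices the second mapping lets the hardware coalesce each warp's loads and stores into far fewer memory transactions, which is the effect the paper's vectorization framework aims to achieve automatically.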