
Data access optimized applications on the GPU using NVIDIA CUDA

Dheevatsa Mudigere
Technische Universität München
Technische Universität München, 2009

@article{mudigere2009data,
   title={Data access optimized applications on the GPU using NVIDIA CUDA},
   author={Mudigere, D.},
   year={2009}
}

This work is an attempt to address the problem of bandwidth-limited performance in data-intensive GPGPU applications. Performance limited by memory bandwidth is a common issue for data-intensive HPC applications in general; on the GPU the problem is more pronounced owing to its unique architecture. It is tackled here by optimizing basic data rearrangement operations on the GPU. To this end, methods and approaches for optimizing data rearrangement on GPU architectures in general have been identified and formulated, and then used to develop near-optimal, generic GPU kernels for a set of data rearrangement operations. In particular, a library of GPU kernels has been developed for operations that rearrange generic m-dimensional data into n dimensions. These kernels have been hand-tuned for maximum throughput, reaching up to 90% of the bandwidth utilization of the intrinsic memcpy function, and are written as templated, generic implementations that allow seamless integration into existing applications. The target GPU architectures considered in this work are the NVIDIA Tesla C1060 and NVIDIA Tesla C870, and the kernels have been developed using NVIDIA CUDA. All the kernels achieve or surpass the best known performance in terms of bandwidth utilization. Furthermore, as a case study, a simple Navier-Stokes-based CFD flow solver incorporating the optimal data rearrangement principles has been developed for the GPU and tested on a 2D lid-driven cavity flow. The GPU implementation is comprehensively compared with optimized serial and parallel CPU implementations on an Intel Nehalem X5550 platform. A maximum speedup of 252x over the serial CPU code, and of 13x over the parallel CPU code (16 MPI processes on 8 cores of two quad-core Nehalem X5550 processors), has been attained.
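The kernel library described in the abstract is not reproduced here. As a rough illustration of the kind of bandwidth-oriented data rearrangement kernel the work targets, the sketch below shows a standard shared-memory tiled transpose in CUDA; coalesced loads and stores plus a padded shared-memory tile are the usual ingredients for pushing such a kernel toward the throughput of a plain device-to-device copy. The kernel name, tile sizes, and host driver are assumptions for illustration, not code taken from the thesis.

// Illustrative sketch only, not the thesis's library code: a shared-memory
// tiled transpose, the textbook example of a bandwidth-optimized data
// rearrangement. Tile sizes and names below are assumptions.
#include <cstdio>
#include <cuda_runtime.h>

#define TILE_DIM   32
#define BLOCK_ROWS 8

// Transpose a (height x width) row-major float matrix into (width x height).
// Both loads and stores are coalesced; the +1 padding on the tile avoids
// shared-memory bank conflicts.
__global__ void transposeCoalesced(float *out, const float *in,
                                   int width, int height)
{
    __shared__ float tile[TILE_DIM][TILE_DIM + 1];

    int x = blockIdx.x * TILE_DIM + threadIdx.x;   // input column
    int y = blockIdx.y * TILE_DIM + threadIdx.y;   // input row

    for (int j = 0; j < TILE_DIM; j += BLOCK_ROWS)
        if (x < width && (y + j) < height)
            tile[threadIdx.y + j][threadIdx.x] = in[(y + j) * width + x];

    __syncthreads();

    x = blockIdx.y * TILE_DIM + threadIdx.x;       // output column
    y = blockIdx.x * TILE_DIM + threadIdx.y;       // output row

    for (int j = 0; j < TILE_DIM; j += BLOCK_ROWS)
        if (x < height && (y + j) < width)
            out[(y + j) * height + x] = tile[threadIdx.x][threadIdx.y + j];
}

int main()
{
    const int width = 1024, height = 2048;
    const size_t bytes = size_t(width) * height * sizeof(float);

    float *h_in = (float*)malloc(bytes), *h_out = (float*)malloc(bytes);
    for (int i = 0; i < width * height; ++i) h_in[i] = (float)i;

    float *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

    dim3 block(TILE_DIM, BLOCK_ROWS);
    dim3 grid((width + TILE_DIM - 1) / TILE_DIM,
              (height + TILE_DIM - 1) / TILE_DIM);
    transposeCoalesced<<<grid, block>>>(d_out, d_in, width, height);
    cudaDeviceSynchronize();

    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);
    // Spot-check one element: out[col][row] must equal in[row][col].
    printf("in[3][5]=%f  out[5][3]=%f\n",
           h_in[3 * width + 5], h_out[5 * height + 3]);

    cudaFree(d_in); cudaFree(d_out); free(h_in); free(h_out);
    return 0;
}

In practice such a kernel is timed against cudaMemcpy on the same buffer to obtain the effective-bandwidth fraction the abstract quotes.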
