Experiences with Mapping Non-linear Memory Access Patterns into GPUs
Department of Computer Architecture, University of Malaga, Spain
Computational Science – ICCS 2009, Lecture Notes in Computer Science, 2009, Volume 5544/2009, 924-933
@article{gutierrez2009experiences,
title={Experiences with mapping non-linear memory access patterns into GPUs},
author={Gutierrez, E. and Romero, S. and Trenas, M. and Plata, O.},
journal={Computational Science–ICCS 2009},
pages={924–933},
year={2009},
publisher={Springer}
}
Modern Graphics Processing Units (GPUs) are very powerful computational systems on a chip. For this reason there is a growing interest in using these units as general-purpose hardware accelerators (GPGPU). To facilitate the programming of general-purpose applications, NVIDIA introduced the CUDA programming environment. CUDA provides a simplified abstraction of the underlying complex GPU architecture, so a number of critical optimizations must be applied to the code in order to obtain maximum performance. In this paper we discuss our experience in porting an application kernel to the GPU, and the classes of design decisions we adopted in order to obtain maximum performance.
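As an illustration of the kind of non-linear memory access pattern the paper deals with, consider a gather through an indirection table: each thread writes its output element contiguously (a coalesced store), but reads the input through a permutation array, so loads within a warp may be scattered across memory. The sketch below is not taken from the paper; the kernel, the index array perm, and the data sizes are hypothetical, chosen only to show the pattern in plain CUDA.

// Minimal sketch of a non-linear (indirect) access pattern in CUDA.
// The kernel and index table are illustrative assumptions, not the
// authors' code: out[] is written contiguously (coalesced), while in[]
// is read through perm[], so the loads may be scattered.
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

__global__ void gather(const float *in, const int *perm, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[perm[i]];   // indirect load, contiguous store
}

int main(void)
{
    const int n = 1 << 20;
    size_t fbytes = n * sizeof(float), ibytes = n * sizeof(int);

    float *h_in  = (float *)malloc(fbytes);
    float *h_out = (float *)malloc(fbytes);
    int   *h_perm = (int *)malloc(ibytes);
    for (int i = 0; i < n; ++i) { h_in[i] = (float)i; h_perm[i] = (i * 7) % n; }

    float *d_in, *d_out; int *d_perm;
    cudaMalloc(&d_in, fbytes);
    cudaMalloc(&d_out, fbytes);
    cudaMalloc(&d_perm, ibytes);
    cudaMemcpy(d_in, h_in, fbytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_perm, h_perm, ibytes, cudaMemcpyHostToDevice);

    gather<<<(n + 255) / 256, 256>>>(d_in, d_perm, d_out, n);
    cudaMemcpy(h_out, d_out, fbytes, cudaMemcpyDeviceToHost);

    printf("out[0] = %f (expected %f)\n", h_out[0], h_in[h_perm[0]]);

    cudaFree(d_in); cudaFree(d_out); cudaFree(d_perm);
    free(h_in); free(h_out); free(h_perm);
    return 0;
}

Reorganizing such accesses so that threads of a warp touch consecutive addresses (or staging data through shared memory) is the type of optimization the paper explores; the sketch above only sets up the baseline pattern.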
September 8, 2011 by hgpu