A code motion technique for accelerating general-purpose computation on the GPU

Takatoshi Ikeda, Fumihiko Ino, and Kenichi Hagihara
Graduate School of Information Science and Technology, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka 560-8531, Japan
20th International Parallel and Distributed Processing Symposium, 2006. IPDPS 2006.

@conference{ikeda2006code,
   title={A code motion technique for accelerating general-purpose computation on the GPU},
   author={Ikeda, T. and Ino, F. and Hagihara, K.},
   booktitle={Parallel and Distributed Processing Symposium, 2006. IPDPS 2006. 20th International},
   pages={10},
   isbn={1424400546},
   year={2006},
   organization={IEEE}
}

Graphics processing units (GPUs) are providing increasingly higher performance with programmable internal processors, namely vertex processors (VPs) and fragment processors (FPs). Such newly added capabilities motivate us to perform general-purpose computation on GPUs (GPGPU) beyond graphics applications. Although VPs and FPs are connected in a pipeline, many GPGPU implementations utilize only FPs as the computational engine in the GPU. Such implementations may therefore suffer lower performance, because the highly loaded FPs (as compared to the VPs) become the bottleneck of the pipeline execution. The objective of our work is to improve the performance of GPGPU programs by eliminating this bottleneck. To achieve this, we present a code motion technique that reduces the FP workload by moving assembly instructions appropriately from the FP program to the VP program. We also define the class of movable instructions that do not change the I/O specification between the CPU and the GPU. The experimental results show that (1) our technique improves the performance of a Gaussian filter program, reducing execution time by approximately 40%, and (2) it successfully reduces the FP workload in 10 out of 18 GPGPU programs.
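The core idea — classifying FP instructions that can safely be moved into the VP — can be illustrated with a toy pass. This is a hedged sketch, not the authors' implementation: the instruction names, the affine-op whitelist, and the `partition` helper are all illustrative assumptions. An instruction is treated as movable here only if its operands come from interpolated varyings, uniforms, or earlier movable results, and its operation commutes with linear interpolation (so evaluating it per vertex and interpolating gives the same per-fragment value); texture reads and anything depending on them must stay in the FP.

```python
# Toy code-motion pass: split a fragment-program instruction list into
# instructions movable to the vertex program and those that must remain.
# All opcode names and register conventions are illustrative assumptions.
from dataclasses import dataclass

AFFINE_OPS = {"MOV", "ADD", "SUB", "MUL_CONST", "MAD_CONST"}  # affine in varyings
FRAGMENT_ONLY = {"TEX", "TXP", "KIL"}  # require per-fragment evaluation

@dataclass
class Instr:
    op: str
    dst: str
    srcs: tuple

def partition(fp_program, varyings, uniforms):
    """Return (movable_to_vp, remaining_fp) for a straight-line FP program."""
    movable_regs = set(varyings) | set(uniforms)  # values known per vertex
    movable, remaining = [], []
    for ins in fp_program:
        deps_ok = all(s in movable_regs for s in ins.srcs)
        if ins.op in FRAGMENT_ONLY or ins.op not in AFFINE_OPS or not deps_ok:
            remaining.append(ins)  # stays in the FP; its result is per-fragment
        else:
            movable_regs.add(ins.dst)  # result can be computed in the VP
            movable.append(ins)        # and passed down as a new varying
    return movable, remaining

# Example: an affine texture-coordinate transform is hoisted to the VP,
# while the texture fetch and everything downstream of it stay in the FP.
prog = [
    Instr("MUL_CONST", "r0", ("texcoord", "scale")),   # movable
    Instr("ADD",       "r1", ("r0", "offset")),        # movable
    Instr("TEX",       "r2", ("r1",)),                 # texture read: stays
    Instr("ADD",       "out", ("r2", "bias")),         # depends on TEX: stays
]
mv, fp = partition(prog, varyings={"texcoord"}, uniforms={"scale", "offset", "bias"})
```

Note that preserving the CPU/GPU I/O specification, as the paper requires, would additionally demand that the moved instructions' results be wired up as extra varyings without exceeding interpolator limits; this sketch only performs the classification step.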
