Iterative Statistical Kernels on Contemporary GPUs

Thilina Gunarathne, Bimalee Salpitikorala, Arun Chauhan, Geoffrey Fox
School of Informatics and Computing, Indiana University, Bloomington, IN 47405, USA
International Journal of Computational Science and Engineering, Vol.8, 58-77, 2013


@article{gunarathne2013iterative,
   title={Iterative statistical kernels on contemporary GPUs},
   author={Gunarathne, Thilina and Salpitikorala, Bimalee and Chauhan, Arun and Fox, Geoffrey},
   journal={International Journal of Computational Science and Engineering},
   volume={8},
   pages={58--77},
   year={2013}
}




We present a study of three important kernels that occur frequently in iterative statistical applications: Multi-Dimensional Scaling (MDS), PageRank, and K-Means. We implemented each kernel in OpenCL and evaluated its performance on NVIDIA Tesla and NVIDIA Fermi GPGPU cards using dedicated hardware, and, in the case of Fermi, also in the Amazon EC2 cloud-computing environment. By examining the underlying algorithms and empirically measuring the performance of the kernels' components, we explored optimizing the kernels through four main techniques: (1) caching invariant data in GPU memory across iterations, (2) selectively placing data in different memory levels, (3) rearranging data in memory, and (4) dividing the work between the GPU and the CPU. We also implemented a novel algorithm for MDS and a novel data layout scheme for PageRank. Our optimizations yielded performance improvements of up to 5X to 6X over naive OpenCL implementations and up to 100X over a single-core CPU. We believe that these categories of optimizations are also applicable to other similar kernels. Finally, we draw several lessons that are useful not only for implementing other similar kernels in OpenCL, but also for devising code-generation strategies in compilers that target GPGPUs through OpenCL.

