Heterogeneous Highly Parallel Implementation of Matrix Exponentiation Using GPU

Chittampally Vasanth Raja, Srinivas Balasubramanian, Prakash S. Raghavendra
Department of Information Technology, National Institute of Technology Karnataka, Surathkal, India
International Journal of Distributed and Parallel Systems (IJDPS) Vol.3, No.2, March 2012, arXiv:1204.3052v1 [cs.DC] (13 Apr 2012)


@article{2012arXiv1204.3052V,
   author   = {{Vasanth Raja}, C. and {Balasubramanian}, S. and {Raghavendra}, P.~S.},
   title    = "{Heterogeneous Highly Parallel Implementation of Matrix Exponentiation Using GPU}",
   journal  = {ArXiv e-prints},
   eprint   = {1204.3052},
   year     = 2012,
   keywords = {Computer Science - Distributed, Parallel, and Cluster Computing},
   adsnote  = {Provided by the SAO/NASA Astrophysics Data System}
}





The vision of a supercomputer on every desk can be realized through powerful, highly parallel CPUs, GPUs, or APUs. Graphics processors, once specialized for graphics applications alone, are now used for highly compute-intensive general-purpose applications. GFLOPS- and TFLOPS-level performance, once very expensive, has become very cheap with GPGPUs. The current work focuses on a highly parallel implementation of matrix exponentiation. Matrix exponentiation is widely used across the scientific community, ranging from safety-critical flight and CAD simulations to financial and statistical applications. The proposed solution uses OpenCL to exploit the massive parallelism offered by many-core GPGPUs. It employs many general GPU optimizations as well as architecture-specific ones; the experiments cover optimizations targeted at scientific graphics cards (Tesla C2050). The heterogeneous highly parallel matrix exponentiation method has been tested on matrices of different sizes raised to different powers. The devised kernel has shown a 1000x speedup, and a 44-fold speedup over the naive GPU kernel.
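The paper's OpenCL kernels are not reproduced on this page, but the underlying algorithm is standard: exponentiation by squaring reduces a matrix power M^p to O(log p) matrix multiplications, each of which is the data-parallel operation a GPU kernel accelerates. A minimal plain-Python sketch of that algorithm (the function names here are illustrative, not from the paper):

```python
def mat_mul(a, b):
    """Naive dense matrix multiply -- the inner operation a GPU kernel would offload."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(m, p):
    """Compute m**p by repeated squaring: O(log p) multiplications instead of p-1."""
    n = len(m)
    result = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # identity
    base = m
    while p > 0:
        if p & 1:              # current bit of the exponent is set
            result = mat_mul(result, base)
        base = mat_mul(base, base)  # square the base for the next bit
        p >>= 1
    return result
```

For example, powers of [[1, 1], [1, 0]] yield Fibonacci numbers, so `mat_pow([[1, 1], [1, 0]], 10)` returns `[[89, 55], [55, 34]]`. On a GPU, each `mat_mul` call would instead launch a kernel over the output elements, which is where the reported speedups come from.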

HGPU group © 2010-2021 hgpu.org

All rights belong to the respective authors
