Developing a High Performance Software Library with MPI and CUDA for Matrix Computations

Bogdan Oancea, Tudorel Andrei
"Nicolae Titulescu" University of Bucharest
Computational Methods in Social Sciences (CMSS), Vol. I, Issue 2/2013, 2013

@article{oancea2013developing,

   title={Developing a High Performance Software Library with MPI and CUDA for Matrix Computations},

   author={Oancea, Bogdan and Andrei, Tudorel},

   journal={Computational Methods in Social Sciences (CMSS)},

   volume={1},

   number={2},

   year={2013}

}

Nowadays, the paradigm of parallel computing is changing. CUDA is now a popular programming model for general-purpose computations on GPUs, and a great number of applications ported to CUDA have obtained speedups of orders of magnitude compared to optimized CPU implementations. Hybrid approaches that combine the message passing model with the shared memory model of parallel computing are a solution for very large applications. We considered a heterogeneous cluster that combines CPU and GPU computations using MPI and CUDA to develop a high performance linear algebra library. Our library deals with large linear system solvers because they are a common problem in science and engineering. Direct methods for computing the solution of such systems can be very expensive due to high memory requirements and computational cost. An efficient alternative is iterative methods, which compute only an approximation of the solution. In this paper we present an implementation of a library that uses a hybrid model of computation based on MPI and CUDA, implementing both direct and iterative linear system solvers. Our library implements LU and Cholesky factorization based solvers and some of the non-stationary iterative methods using the MPI/CUDA combination. We compared the performance of our MPI/CUDA implementation with classic programs written to run on a single CPU.
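To make the "non-stationary iterative methods" mentioned in the abstract concrete, here is a minimal single-process sketch of one such method, the conjugate gradient algorithm, in plain Python. This is only an illustration of the iteration itself, not the paper's library or API: the authors' implementation distributes this kind of computation across MPI ranks and CUDA devices, whereas this sketch runs serially on small dense matrices.

```python
# Conjugate gradient for a symmetric positive definite system A x = b.
# Plain-Python illustration of a non-stationary iterative solver; the
# paper's library parallelizes the matrix-vector products and dot
# products below with MPI and CUDA.

def matvec(A, x):
    # Dense matrix-vector product A @ x.
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def conjugate_gradient(A, b, tol=1e-12, max_iter=1000):
    n = len(b)
    x = [0.0] * n          # initial guess x0 = 0
    r = b[:]               # residual r = b - A x0 = b
    p = r[:]               # initial search direction
    rs_old = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:   # stop when the residual is small enough
            break
        beta = rs_new / rs_old
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

# Small worked example: the exact solution is x = [1/11, 7/11].
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
print(x)
```

In the hybrid setting described by the paper, the dominant cost per iteration is the matrix-vector product, which is the natural candidate for offloading to the GPU, while the reductions (dot products) require communication across MPI processes.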
