Parallel implementation of conjugate gradient method on graphics processors

Marcin Wozniak, Tomasz Olas, Roman Wyrzykowski
Institute of Computational and Information Sciences, Czestochowa University of Technology, Czestochowa, Poland
Parallel Processing and Applied Mathematics, Lecture Notes in Computer Science, 2010, Volume 6067/2010, 125-135


@article{wozniak2010parallel,
   title={Parallel implementation of conjugate gradient method on graphics processors},
   author={Wozniak, M. and Olas, T. and Wyrzykowski, R.},
   journal={Parallel Processing and Applied Mathematics},
   series={Lecture Notes in Computer Science},
   volume={6067},
   pages={125--135},
   year={2010}
}




Nowadays, GPUs are becoming extremely promising multi-/many-core architectures for a wide range of demanding applications. Basic features of these architectures include a large number of relatively simple processing units operating in SIMD fashion, combined with hardware-supported, advanced multithreading. However, the use of GPUs in everyday practice is still limited, mainly because implemented algorithms must be deeply adapted to the target architecture. In this work, we propose how to perform such an adaptation to achieve an efficient parallel implementation of the conjugate gradient (CG) algorithm, which is widely used for solving large sparse linear systems of equations arising, e.g., in FEM problems. Aiming at an efficient implementation of the main operation of the CG algorithm, sparse matrix-vector multiplication (SpMV), different techniques for optimizing access to the hierarchical memory of GPUs are proposed and studied. The experimental investigation of the proposed CUDA-based implementation of the CG algorithm is carried out on two GPU architectures: GeForce 8800 and Tesla C1060. It is shown that optimizing access to GPU memory considerably reduces the execution time of the SpMV operation, and consequently yields a significant speedup over CPUs when implementing the whole CG algorithm.
