Using Graphics Processors to Accelerate the Solution of Out-of-Core Linear Systems
Depto. de Ingeniería y Ciencia de Computadores, Universidad Jaume I, 12.071 Castellón, Spain
Eighth International Symposium on Parallel and Distributed Computing (ISPDC '09), 2009
@inproceedings{marques2009using,
title={Using graphics processors to accelerate the solution of out-of-core linear systems},
author={Marqu{\'e}s, M. and Quintana-Orti, G. and Quintana-Orti, E.S. and van de Geijn, R.},
booktitle={Parallel and Distributed Computing, 2009. ISPDC'09. Eighth International Symposium on},
pages={169--176},
year={2009},
organization={IEEE}
}
We investigate the use of graphics processors (GPUs) to accelerate the solution of large-scale linear systems when the problem data is larger than the main memory of the system and storage on disk is employed. Our solution addresses the programmability problem with a combination of the high-level approach in libflame (the FLAME library for dense linear algebra) and a run-time system that handles I/O transparently to the programmer. Results on a desktop computer equipped with an NVIDIA GPU reveal this platform as a cost-effective tool that yields high performance when solving moderate to large-scale linear algebra problems. The computation of the Cholesky factorization is used to illustrate these techniques.
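For readers unfamiliar with the algorithmic pattern the paper builds on, the following is a minimal sketch in plain C of a blocked, right-looking Cholesky factorization. It is not the authors' libflame or run-time implementation; the function names (chol_diag, chol_blocked) and the row-major in-place layout are assumptions made here for illustration. In the out-of-core, GPU-accelerated setting described in the paper, the same three steps operate on tiles that a run-time system stages between disk, host memory, and GPU memory.

/*
 * Sketch: blocked right-looking Cholesky factorization A = L * L^T,
 * working in place on the lower triangle of a row-major n x n matrix.
 * (Illustrative only; not the paper's libflame-based implementation.)
 */
#include <math.h>

#define A(i, j) a[(size_t)(i) * n + (j)]

/* Unblocked Cholesky of the kb x kb diagonal block starting at (k, k). */
static int chol_diag(double *a, int n, int k, int kb)
{
    for (int j = k; j < k + kb; j++) {
        double d = A(j, j);
        for (int p = k; p < j; p++) d -= A(j, p) * A(j, p);
        if (d <= 0.0) return -1;            /* matrix not positive definite */
        A(j, j) = sqrt(d);
        for (int i = j + 1; i < k + kb; i++) {
            double s = A(i, j);
            for (int p = k; p < j; p++) s -= A(i, p) * A(j, p);
            A(i, j) = s / A(j, j);
        }
    }
    return 0;
}

/* Blocked right-looking Cholesky with block size b. */
int chol_blocked(double *a, int n, int b)
{
    for (int k = 0; k < n; k += b) {
        int kb = (n - k < b) ? n - k : b;

        /* 1. Factor the diagonal block: A11 := L11. */
        if (chol_diag(a, n, k, kb) != 0) return -1;

        /* 2. Triangular solve for the panel: A21 := A21 * L11^{-T}. */
        for (int i = k + kb; i < n; i++)
            for (int j = k; j < k + kb; j++) {
                double s = A(i, j);
                for (int p = k; p < j; p++) s -= A(i, p) * A(j, p);
                A(i, j) = s / A(j, j);
            }

        /* 3. Symmetric rank-kb update of the trailing matrix:
              A22 := A22 - L21 * L21^T (lower triangle only). */
        for (int i = k + kb; i < n; i++)
            for (int j = k + kb; j <= i; j++) {
                double s = 0.0;
                for (int p = k; p < k + kb; p++) s += A(i, p) * A(j, p);
                A(i, j) -= s;
            }
    }
    return 0;
}

In a production setting, steps 1-3 correspond to the standard POTRF, TRSM, and SYRK kernels applied to blocks; the paper's contribution is to keep this high-level algorithm unchanged while a run-time layer moves the blocks between disk and GPU memory and dispatches the compute-heavy updates to the accelerator.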
August 4, 2011 by hgpu