A Synchronization-Free Algorithm for Parallel Sparse Triangular Solves
Niels Bohr Institute, University of Copenhagen, Denmark
22nd International European Conference on Parallel and Distributed Computing (Euro-Par ’16), 2016
@inproceedings{liu2016synchronization,
title={A Synchronization-Free Algorithm for Parallel Sparse Triangular Solves},
author={Liu, Weifeng and Li, Ang and Hogg, Jonathan and Duff, Iain S. and Vinter, Brian},
booktitle={Euro-Par 2016: Parallel Processing: 22nd International Conference on Parallel and Distributed Computing, Grenoble, France, August 22--26, 2016, Proceedings},
year={2016},
organization={Springer Berlin Heidelberg}
}
The sparse triangular solve kernel, SpTRSV, is an important building block for a number of numerical linear algebra routines. Parallelizing SpTRSV on today’s manycore platforms, such as GPUs, is not an easy task, since computing a component of the solution may depend on previously computed components, enforcing a degree of sequential processing. As a consequence, most existing work introduces a preprocessing stage to partition the components into a group of level-sets or colour-sets so that components within a set are independent and can be processed simultaneously during the subsequent solution stage. However, this class of methods requires a long preprocessing time as well as significant runtime synchronization overhead between the sets. To address this, we propose in this paper a novel approach for SpTRSV in which the ordering between components is naturally enforced within the solution stage. In this way, the cost of preprocessing can be greatly reduced, and the synchronizations between sets are completely eliminated. A comparison with the state-of-the-art library supplied by the GPU vendor, using 11 sparse matrices on the latest GPU device, shows that our approach obtains an average speedup of 2.3 times in single precision and 2.14 times in double precision. The maximum speedups are 5.95 and 3.65, respectively. In addition, our method is an order of magnitude faster in the preprocessing stage than existing methods.
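To make the dependency structure that the abstract describes concrete, below is a minimal sequential sketch of SpTRSV as forward substitution on a lower-triangular matrix stored in CSR format. This is an illustrative baseline only, not the authors' synchronization-free GPU algorithm; the function name and the convention that each row stores its diagonal entry last are assumptions of this sketch.

```python
def sptrsv_csr_lower(row_ptr, col_idx, vals, b):
    """Sequential forward substitution: solve L x = b for sparse
    lower-triangular L in CSR form (illustrative sketch only).

    Assumed convention: the diagonal entry L[i][i] is stored as the
    last nonzero of row i. Note how x[i] depends on every earlier
    x[j] with L[i][j] != 0 -- the dependency chains that force
    level-set scheduling or, in the paper's approach, in-solve
    ordering on parallel hardware.
    """
    n = len(b)
    x = [0.0] * n
    for i in range(n):
        s = b[i]
        # Subtract contributions of already-solved components.
        for k in range(row_ptr[i], row_ptr[i + 1] - 1):
            s -= vals[k] * x[col_idx[k]]
        # Divide by the diagonal entry L[i][i].
        x[i] = s / vals[row_ptr[i + 1] - 1]
    return x


# Example: L = [[1,0,0],[2,1,0],[3,4,1]], b = [1,3,8] gives x = [1,1,1].
x = sptrsv_csr_lower([0, 1, 3, 6],
                     [0, 0, 1, 0, 1, 2],
                     [1.0, 2.0, 1.0, 3.0, 4.0, 1.0],
                     [1.0, 3.0, 8.0])
```

In a level-set method, rows with no unsolved dependencies form one set and are solved together, with a synchronization barrier before the next set; the paper's contribution is to eliminate those barriers entirely.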
June 28, 2016 by hgpu