Tridiagonalization of a dense symmetric matrix on multiple GPUs and its application to symmetric eigenvalue problems
Electrical Engineering and Computer Science, University of Tennessee, Knoxville, Tennessee, U.S.A.
Concurrency and Computation: Practice and Experience, 2013
DOI: 10.1002/cpe.3152
@article{yamazaki2013tridiagonalization,
title={Tridiagonalization of a dense symmetric matrix on multiple GPUs and its application to symmetric eigenvalue problems},
author={Yamazaki, Ichitaro and Dong, Tingxing and Solc{\`a}, Raffaele and Tomov, Stanimire and Dongarra, Jack and Schulthess, Thomas},
journal={Concurrency and Computation: Practice and Experience},
year={2013},
publisher={Wiley Online Library}
}
For software to fully exploit the computing power of emerging heterogeneous computers, not only must the required computational kernels be optimized for the specific hardware architecture, but an effective scheduling scheme is also needed to utilize the available heterogeneous computational units and to hide the communication between them. As a case study, we develop a static scheduling scheme for the tridiagonalization of a symmetric dense matrix on multicore CPUs with multiple graphics processing units (GPUs) on a single compute node. We then parallelize and optimize the Basic Linear Algebra Subroutines (BLAS)-2 symmetric matrix-vector multiplication and the BLAS-3 low-rank symmetric matrix updates on the GPUs. We demonstrate the good scalability of these multi-GPU BLAS kernels and the effectiveness of our scheduling scheme on twelve Intel Xeon processors and three NVIDIA GPUs. We then integrate our hybrid CPU-GPU kernel into computational kernels at higher levels of the software stack, namely a shared-memory dense eigensolver and a distributed-memory sparse eigensolver. Our experimental results show that our kernels greatly improve the performance of these higher-level kernels, not only reducing the solution time but also enabling the solution of larger-scale problems. Because such symmetric eigenvalue problems arise in many scientific and engineering simulations, our kernels could potentially lead to new scientific discoveries. Furthermore, these dense linear algebra algorithms exhibit algorithmic characteristics found in many other algorithms; hence, they are not only important computational kernels in their own right but also useful testbeds for studying the performance of emerging computers and the effects of various optimization techniques.
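Within the tridiagonalization, the BLAS-2 symmetric matrix-vector multiply (SYMV) dominates the run time, since each Householder reflector requires a product w = A*v against the trailing submatrix. The sketch below illustrates one simple way such a product can be spread across multiple GPUs: a 1-D block-row partition where each GPU applies cublasDgemv to its row block. This is only a minimal illustration, not the paper's kernel: the paper exploits symmetric storage so that only half of the matrix is accessed and uses static scheduling to overlap CPU and GPU work, whereas this sketch assumes the full matrix is stored on the host and visits the GPUs one after another (the helper name multi_gpu_symv and the test setup in main are hypothetical).

// Minimal multi-GPU y = A*x sketch with a 1-D block-row partition.
// Assumptions (not the paper's method): full column-major storage of the
// symmetric matrix on the host, synchronous per-GPU processing.
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>
#include <cublas_v2.h>

static void multi_gpu_symv(int num_gpus, int n,
                           const double *A, const double *x, double *y)
{
    for (int g = 0; g < num_gpus; ++g) {
        cudaSetDevice(g);
        cublasHandle_t handle;
        cublasCreate(&handle);

        // GPU g owns rows [r0, r0+rows) of A and the matching slice of y.
        int r0   = (int)((long long)g * n / num_gpus);
        int rows = (int)((long long)(g + 1) * n / num_gpus) - r0;

        double *dA, *dx, *dy;
        cudaMalloc((void**)&dA, (size_t)rows * n * sizeof(double));
        cudaMalloc((void**)&dx, (size_t)n * sizeof(double));
        cudaMalloc((void**)&dy, (size_t)rows * sizeof(double));

        // Copy the rows-by-n row block (column-major, leading dimension n on
        // the host, rows on the device) and the input vector to the GPU.
        cublasSetMatrix(rows, n, sizeof(double), A + r0, n, dA, rows);
        cublasSetVector(n, sizeof(double), x, 1, dx, 1);

        const double one = 1.0, zero = 0.0;
        // y[r0 : r0+rows] = A[r0 : r0+rows, :] * x via general GEMV; symmetry
        // is not exploited here, unlike the paper's optimized SYMV.
        cublasDgemv(handle, CUBLAS_OP_N, rows, n, &one, dA, rows,
                    dx, 1, &zero, dy, 1);

        cublasGetVector(rows, sizeof(double), dy, 1, y + r0, 1);

        cudaFree(dA); cudaFree(dx); cudaFree(dy);
        cublasDestroy(handle);
    }
}

int main(void)
{
    int num_gpus = 0;
    cudaGetDeviceCount(&num_gpus);
    if (num_gpus < 1) { fprintf(stderr, "no CUDA device found\n"); return 1; }

    const int n = 1024;
    double *A = (double*)malloc((size_t)n * n * sizeof(double));
    double *x = (double*)malloc((size_t)n * sizeof(double));
    double *y = (double*)malloc((size_t)n * sizeof(double));

    // Simple symmetric test matrix A(i,j) = 1/(1+|i-j|) and x = ones.
    for (int j = 0; j < n; ++j) {
        x[j] = 1.0;
        for (int i = 0; i < n; ++i)
            A[i + (size_t)j * n] = 1.0 / (1.0 + abs(i - j));
    }

    multi_gpu_symv(num_gpus, n, A, x, y);
    printf("y[0] = %f, y[n-1] = %f\n", y[0], y[n-1]);

    free(A); free(x); free(y);
    return 0;
}

A production kernel would instead keep each row block resident on its GPU across the whole factorization and overlap the per-GPU GEMVs with asynchronous streams; the serial loop above is kept for clarity only.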