Efficiently Computing Tensor Eigenvalues on a GPU
Comput. Sci. Dept., UC Berkeley, Berkeley, CA, USA
IEEE International Symposium on Parallel and Distributed Processing Workshops and Phd Forum (IPDPSW), 2011
@article{ballard2011efficiently,
  title     = {Efficiently Computing Tensor Eigenvalues on a GPU},
  author    = {Ballard, G. and Kolda, T. and Plantenga, T.},
  year      = {2011},
  publisher = {tech. report, Sandia National Laboratories, Albuquerque, NM and Livermore, CA}
}
The tensor eigenproblem has many important applications, generating both mathematical and application-specific interest in the properties of tensor eigenpairs and methods for computing them. A tensor is an m-way array, generalizing the concept of a matrix (a 2-way array). Kolda and Mayo have recently introduced a generalization of the matrix power method for computing real-valued tensor eigenpairs of symmetric tensors. In this work, we present an efficient implementation of their algorithm, exploiting symmetry in order to save storage, data movement, and computation. For an application that requires repeatedly solving the tensor eigenproblem for many small tensors, we describe how a GPU can be used to accelerate the computations. On an NVIDIA Tesla C2050 (Fermi) GPU, we achieve 318 Gflop/s (31% of theoretical peak single-precision performance) on our test data set.
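To make the eigenproblem concrete: for a symmetric m-way tensor A, an eigenpair (λ, x) satisfies A x^{m-1} = λ x with ‖x‖ = 1. The sketch below illustrates a shifted symmetric higher-order power iteration in the spirit of Kolda and Mayo's method for a small 3-way tensor; the shift value, starting vector, and stopping tolerance are illustrative assumptions, not the paper's GPU implementation.

```python
# Minimal NumPy sketch of a shifted symmetric higher-order power iteration
# for a symmetric 3-way tensor A (order m = 3).  The shift alpha, random
# starting vector, and tolerance are illustrative choices (assumptions).
import numpy as np

def ss_hopm(A, alpha=1.0, tol=1e-10, max_iters=1000):
    """Return an approximate eigenpair (lam, x) with A x^{m-1} = lam * x, ||x|| = 1."""
    n = A.shape[0]
    rng = np.random.default_rng(0)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(max_iters):
        # A x^{m-1}: contract the tensor with x along all but one mode.
        Ax2 = np.einsum('ijk,j,k->i', A, x, x)
        x_new = Ax2 + alpha * x            # positive shift stabilizes the update
        x_new /= np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    lam = x @ np.einsum('ijk,j,k->i', A, x, x)   # eigenvalue estimate A x^m
    return lam, x

# Example: a small random tensor, symmetrized over all index permutations.
n = 4
B = np.random.default_rng(1).standard_normal((n, n, n))
A = (B + B.transpose(0, 2, 1) + B.transpose(1, 0, 2)
       + B.transpose(1, 2, 0) + B.transpose(2, 0, 1) + B.transpose(2, 1, 0)) / 6
lam, x = ss_hopm(A)
residual = np.linalg.norm(np.einsum('ijk,j,k->i', A, x, x) - lam * x)
print(lam, residual)   # residual should be near zero at convergence
```

In the application the paper targets, many small tensors would each run an iteration like this independently, which is why batching the solves across GPU threads pays off.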
November 12, 2011 by hgpu