Tensor Contractions with Extended BLAS Kernels on CPU and GPU

Yang Shi, U. N. Niranjan, Animashree Anandkumar, Cris Cecka
EECS Department
arXiv:1606.05696 [cs.DC], 17 Jun 2016

@article{shi2016tensor,
   title={Tensor Contractions with Extended BLAS Kernels on CPU and GPU},
   author={Shi, Yang and Niranjan, U. N. and Anandkumar, Animashree and Cecka, Cris},
   year={2016},
   month={jun},
   eprint={1606.05696},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}


Tensor contractions constitute a key computational ingredient of numerical multi-linear algebra. However, as the order and dimension of tensors grow, the time and space complexities of tensor-based computations grow quickly. Existing approaches for tensor contractions typically involve explicit copy and transpose operations. In this paper, we propose and evaluate a new BLAS-like primitive STRIDEDBATCHEDGEMM that is capable of performing a wide range of tensor contractions on CPU and GPU efficiently. Through systematic benchmarking, we demonstrate the advantages of our approach over conventional approaches. Concretely, we implement the Tucker decomposition and show that using our kernels yields a 100x speedup compared to implementations using existing state-of-the-art libraries.
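To make the primitive concrete, below is a minimal sketch (not code from the paper) of how a single-mode contraction such as C(m,n,p) = sum_k A(m,k) * B(k,n,p) can be expressed as one strided batched GEMM through cuBLAS's cublasSgemmStridedBatched interface (available since cuBLAS 8.0). The dimensions M, N, K, P and all variable names are illustrative assumptions; data is taken to be column-major with the first mode fastest.

// Illustrative sketch: the contraction C(m,n,p) = sum_k A(m,k) * B(k,n,p)
// evaluated as a single strided batched GEMM. Dimensions and names are
// chosen for illustration only.
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <vector>
#include <cstdio>

int main() {
    const int M = 64, N = 32, K = 48, P = 16;   // illustrative sizes

    // Column-major storage: A is M x K, B is K x N x P, C is M x N x P.
    std::vector<float> hA(M * K, 1.0f), hB(K * N * P, 1.0f), hC(M * N * P, 0.0f);

    float *dA, *dB, *dC;
    cudaMalloc(&dA, hA.size() * sizeof(float));
    cudaMalloc(&dB, hB.size() * sizeof(float));
    cudaMalloc(&dC, hC.size() * sizeof(float));
    cudaMemcpy(dA, hA.data(), hA.size() * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB.data(), hB.size() * sizeof(float), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);

    const float alpha = 1.0f, beta = 0.0f;

    // One batched call replaces a loop of P ordinary GEMMs
    //   C(:,:,p) = A * B(:,:,p),  p = 0..P-1.
    // A is shared across the batch (stride 0); B and C advance by one
    // K x N (resp. M x N) slice per batch entry, with no copies or transposes.
    cublasSgemmStridedBatched(handle,
        CUBLAS_OP_N, CUBLAS_OP_N,
        M, N, K,
        &alpha,
        dA, M, 0,                      // A: lda = M, strideA = 0 (reused)
        dB, K, (long long)K * N,       // B: ldb = K, strideB = K*N
        &beta,
        dC, M, (long long)M * N,       // C: ldc = M, strideC = M*N
        P);                            // batch count = number of slices

    cudaMemcpy(hC.data(), dC, hC.size() * sizeof(float), cudaMemcpyDeviceToHost);
    printf("C(0,0,0) = %f (expected %d)\n", hC[0], K);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}

The zero stride on A reuses the same factor matrix for every slice of B, so the contraction runs directly on the tensor's native layout, avoiding the explicit copy and transpose steps the abstract refers to.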
