
A Parallel Sparse Tensor Benchmark Suite on CPUs and GPUs

Jiajia Li, Mahesh Lakshminarasimhan, Xiaolong Wu, Ang Li, Catherine Olschanowsky, Kevin Barker
Pacific Northwest National Laboratory, 902 Battelle Blvd, Richland, WA, USA
arXiv:2001.00660 [cs.DC], (2 Jan 2020)

@misc{li2020parallel,
   title={A Parallel Sparse Tensor Benchmark Suite on CPUs and GPUs},
   author={Jiajia Li and Mahesh Lakshminarasimhan and Xiaolong Wu and Ang Li and Catherine Olschanowsky and Kevin Barker},
   year={2020},
   eprint={2001.00660},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}

Tensor computations present significant performance challenges that impact a wide spectrum of applications, ranging from machine learning, healthcare analytics, social network analysis, and data mining to quantum chemistry and signal processing. Efforts to improve the performance of tensor computations include exploring data layout, execution scheduling, and parallelism in common tensor kernels. This work presents a benchmark suite for arbitrary-order sparse tensor kernels using state-of-the-art tensor formats: coordinate (COO) and hierarchical coordinate (HiCOO), on CPUs and GPUs. It provides a set of reference tensor kernel implementations that work with real-world tensors as well as power-law tensors generated by extending synthetic graph generation techniques. We also propose Roofline performance models for these kernels to provide insights into computer platforms from a sparse tensor perspective.
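To make the COO format mentioned in the abstract concrete, the sketch below shows one plausible way to store an arbitrary-order sparse tensor as a list of index tuples plus values, together with a trivially parallel element-wise kernel. This is only an illustrative assumption, not the benchmark suite's actual data structures or API; the struct name coo_tensor, its fields, and coo_scale are hypothetical.

```c
/* Minimal COO sketch (hypothetical, for illustration only):
 * each nonzero keeps its full index tuple and its value. */
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    uint32_t   nmodes;  /* tensor order (number of modes/dimensions)   */
    uint64_t   nnz;     /* number of stored nonzeros                   */
    uint32_t  *dims;    /* dims[m]: size of mode m (length nmodes)     */
    uint32_t **inds;    /* inds[m][i]: mode-m index of the i-th nonzero */
    double    *vals;    /* vals[i]: value of the i-th nonzero          */
} coo_tensor;

/* Element-wise example: scale every nonzero. The loop is trivially
 * parallel over the nonzero list (e.g., an OpenMP parallel for on CPUs
 * or one thread per nonzero on GPUs). */
static void coo_scale(coo_tensor *t, double alpha)
{
    for (uint64_t i = 0; i < t->nnz; ++i)
        t->vals[i] *= alpha;
}
```

HiCOO, by contrast, groups nonzeros into small multi-dimensional blocks and stores compressed per-block index offsets, trading a more involved layout for better locality and lower index storage. For the Roofline models, the standard formulation bounds attainable performance by min(peak FLOP/s, arithmetic intensity × peak memory bandwidth); the paper adapts this view to sparse tensor kernels.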