Revisiting Online Autotuning for Sparse-Matrix Vector Multiplication Kernels on High-Performance Accelerators

Simon Garcia De Gonzalo, Wen-Mei Hwu, Simon D. Hammond, Christian R. Trott
Department of Computer Science, Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois; Sandia National Laboratories, Albuquerque, New Mexico
International Parallel and Distributed Processing Symposium, 2017

@techreport{garcia2017revisiting,
   title={Revisiting Online Autotuning for Sparse-Matrix Vector Multiplication Kernels on High-Performance Accelerators},
   author={Garcia De Gonzalo, Simon and Hwu, Wen-Mei and Hammond, Simon David and Trott, Christian Robert},
   institution={Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)},
   year={2017}
}


Kokkos [1], [2] is a C++ programming model that offers the ability to write portable code targeting the wide range of parallelism found in current HPC systems. It works by providing abstractions for parallel execution and data layout that are mapped to different hardware resources during compilation. Some parameters, such as thread-team size and vector width, are left for the application scientist to tune their code to a particular hardware platform. For many applications, the right choice of parameters depends strongly on the input data or application configuration rather than on the device characteristics alone. Sparse-Matrix Vector products (SpMV) are highly irregular computational kernels found in a diverse collection of high-performance science applications. Performance of this important kernel is often highly correlated with the matrix sparsity, as sparsity ultimately governs the granularity, and therefore the efficiency, of the memory accesses. In this paper, we propose to extend the current set of Kokkos profiling tools with an autotuner that iterates over possible choices of thread-team size and vector width, taking advantage of runtime information to choose the optimal parameters for a particular input. This approach allows an iterative application that calls the same kernel many times to continue progressing towards a solution while relieving the application programmer of the burden of knowing the details of the underlying hardware and accounting for variable inputs. We compare the autotuner against a fixed approach that always attempts to use all hardware resources, and show that the optimal choice differs significantly between the two latest classes of accelerator architectures. After 100 iterations we identify the subset of matrices that benefit from improved performance, while others sit near the break-even point at which the overhead of the tool has been completely hidden. We highlight the properties of sparse matrices that help determine when autotuning will be of benefit. Finally, we connect the overhead of the autotuner to specific sparsity patterns and hardware resources.
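To make the tuning space concrete, below is a minimal C++ sketch of how an SpMV kernel over a CRS matrix can expose thread-team size and vector width through Kokkos' TeamPolicy, together with a simple online search that measures one candidate configuration per solver iteration and then locks in the fastest. The CrsMatrix struct, the spmv function, and the candidate list are illustrative assumptions for this sketch, not code from the paper; the paper's actual tool hooks into the Kokkos profiling interface rather than the solver loop.

#include <Kokkos_Core.hpp>
#include <limits>
#include <utility>
#include <vector>

// Illustrative CRS storage; field names are assumptions, not from the paper.
struct CrsMatrix {
  Kokkos::View<const int*>    row_ptr; // row offsets, length nrows + 1
  Kokkos::View<const int*>    cols;    // column indices of nonzeros
  Kokkos::View<const double*> vals;    // nonzero values
  int nrows;
};

// y = A * x with one thread per row and a vector-parallel reduction over
// that row's nonzeros. team_size and vector_length are the two knobs the
// autotuner searches over.
void spmv(const CrsMatrix& A,
          Kokkos::View<const double*> x,
          Kokkos::View<double*> y,
          int team_size, int vector_length) {
  using policy_t = Kokkos::TeamPolicy<>;
  const int n_teams = (A.nrows + team_size - 1) / team_size;
  Kokkos::parallel_for("spmv", policy_t(n_teams, team_size, vector_length),
    KOKKOS_LAMBDA(const policy_t::member_type& team) {
      const int row = team.league_rank() * team.team_size() + team.team_rank();
      if (row >= A.nrows) return;
      double sum = 0.0;
      Kokkos::parallel_reduce(
        Kokkos::ThreadVectorRange(team, A.row_ptr(row), A.row_ptr(row + 1)),
        [&](const int j, double& acc) { acc += A.vals(j) * x(A.cols(j)); },
        sum);
      y(row) = sum;
    });
}

// Online search: each of the first few solver iterations times one candidate
// (team_size, vector_length) pair; every later iteration reuses the fastest
// pair seen, so the application keeps making progress throughout.
void tuned_solver_loop(const CrsMatrix& A,
                       Kokkos::View<const double*> x,
                       Kokkos::View<double*> y,
                       int n_iterations) {
  const std::vector<std::pair<int, int>> candidates = {
    {32, 1}, {32, 4}, {64, 4}, {64, 8}, {128, 8}, {256, 16}}; // assumed list
  std::size_t next = 0, best = 0;
  double best_time = std::numeric_limits<double>::max();
  for (int iter = 0; iter < n_iterations; ++iter) {
    const bool exploring = next < candidates.size();
    const std::size_t c = exploring ? next++ : best;
    Kokkos::Timer timer;
    spmv(A, x, y, candidates[c].first, candidates[c].second);
    Kokkos::fence(); // kernels launch asynchronously; fence before timing
    const double t = timer.seconds();
    if (exploring && t < best_time) { best_time = t; best = c; }
    // ... the remainder of the iterative method consumes y here ...
  }
}

Measuring only one candidate per iteration is what allows the search cost to be amortized: once the fastest configuration is locked in, its per-iteration savings pay back the iterations spent on slower candidates, which is the break-even behavior described in the abstract.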