STuning-DL: Model-Driven Autotuning of Sparse GPU Kernels for Deep Learning
CITIC, Computer Architecture Group, University of A Coruña, 15071 A Coruña, Spain
IEEE Access, Volume 12, 2024
@article{castro2024stuning,
title={STuning-DL: Model-Driven Autotuning of Sparse GPU Kernels for Deep Learning},
author={Castro, Roberto L and Andrade, Diego and Fraguela, Basilio B},
journal={IEEE Access},
year={2024},
publisher={IEEE}
}
The relentless growth of modern Machine Learning models has spurred the adoption of sparsification techniques that simplify their architectures and reduce their computational demands. Network pruning has demonstrated success in maintaining the original network accuracy while shedding significant portions of the original weights. However, exploiting this sparsity efficiently remains challenging due to computational irregularities, particularly in GPU kernels. A new trend of template-based GPU kernels for semi-structured sparsity shows promise in efficiency but lacks autotuning capabilities to adapt to input dynamics, often underperforming in scenarios for which they have not been meticulously hand-tuned. We present STuning-DL, the first pruning-aware autotuner for third-party template-based implementations, which enables efficient optimization of sparse kernels for Deep Learning, spanning from high-level aspects (the CUDA C++ level) down to GPU-native instruction specifics (the assembly level). STuning-DL tunes and optimizes sparse kernels' performance at run time for each input problem, yielding speedups of up to 5.42× on an NVIDIA T4-16GB GPU and up to 3.6× on an NVIDIA A100-40GB GPU on sparse matrices from real-world models, compared to the existing heuristics of sparse libraries such as cuSPARSE and cuSPARSELt.
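To make the autotuning idea concrete, below is a minimal, self-contained sketch in plain CUDA C++ of how a per-input tuner can benchmark several template instantiations of a sparse kernel and select the fastest one for a given problem. Everything in it is hypothetical: the csr_spmm kernel, the single TPB (threads-per-block) knob, and the candidate set are illustrative placeholders, not STuning-DL's actual kernels, search space, or API (which, per the abstract, also tunes assembly-level aspects that this sketch does not model).

// autotune_sketch.cu -- hypothetical sketch of per-input kernel autotuning.
// Compile with: nvcc -O2 autotune_sketch.cu -o autotune_sketch
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Candidate kernel: a naive CSR SpMM whose only tunable knob here is the
// thread-block size baked in as a template parameter. A real autotuner
// would expose many more knobs (tile shapes, vector widths, pipelining).
template <int TPB>
__global__ void csr_spmm(const float* vals, const int* cols, const int* rowptr,
                         const float* B, float* C, int rows, int n) {
    int row = blockIdx.x * TPB + threadIdx.x;
    if (row >= rows) return;
    for (int j = 0; j < n; ++j) {
        float acc = 0.f;
        for (int k = rowptr[row]; k < rowptr[row + 1]; ++k)
            acc += vals[k] * B[cols[k] * n + j];
        C[row * n + j] = acc;
    }
}

// Host-side launcher for one template instantiation.
template <int TPB>
void launch(const float* vals, const int* cols, const int* rowptr,
            const float* B, float* C, int rows, int n) {
    csr_spmm<TPB><<<(rows + TPB - 1) / TPB, TPB>>>(vals, cols, rowptr,
                                                   B, C, rows, n);
}

using Launcher = void (*)(const float*, const int*, const int*,
                          const float*, float*, int, int);
struct Candidate { const char* name; Launcher run; };

// Time one candidate with CUDA events (one warm-up launch, then a timed one).
static float time_candidate(Candidate c, const float* vals, const int* cols,
                            const int* rowptr, const float* B, float* C,
                            int rows, int n) {
    cudaEvent_t beg, end;
    cudaEventCreate(&beg); cudaEventCreate(&end);
    c.run(vals, cols, rowptr, B, C, rows, n);          // warm-up
    cudaEventRecord(beg);
    c.run(vals, cols, rowptr, B, C, rows, n);          // timed run
    cudaEventRecord(end);
    cudaEventSynchronize(end);
    float ms = 0.f;
    cudaEventElapsedTime(&ms, beg, end);
    cudaEventDestroy(beg); cudaEventDestroy(end);
    return ms;
}

int main() {
    // Tiny synthetic CSR problem; a real tuner would take the pruned
    // model's actual matrices and sparsity pattern as input.
    const int rows = 1024, ncols = 1024, n = 64, nnz_per_row = 32;
    std::vector<float> h_vals(rows * nnz_per_row, 1.f);
    std::vector<int>   h_cols(rows * nnz_per_row);
    std::vector<int>   h_rowptr(rows + 1);
    for (int r = 0; r <= rows; ++r) h_rowptr[r] = r * nnz_per_row;
    for (size_t i = 0; i < h_cols.size(); ++i) h_cols[i] = (int)(i % ncols);
    std::vector<float> h_B(ncols * n, 1.f);

    float *vals, *B, *C; int *cols, *rowptr;
    cudaMalloc(&vals, h_vals.size() * sizeof(float));
    cudaMalloc(&cols, h_cols.size() * sizeof(int));
    cudaMalloc(&rowptr, h_rowptr.size() * sizeof(int));
    cudaMalloc(&B, h_B.size() * sizeof(float));
    cudaMalloc(&C, (size_t)rows * n * sizeof(float));
    cudaMemcpy(vals, h_vals.data(), h_vals.size() * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(cols, h_cols.data(), h_cols.size() * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(rowptr, h_rowptr.data(), h_rowptr.size() * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(B, h_B.data(), h_B.size() * sizeof(float), cudaMemcpyHostToDevice);

    // Benchmark each candidate configuration and keep the fastest.
    Candidate cands[] = { {"TPB=64",  launch<64>},
                          {"TPB=128", launch<128>},
                          {"TPB=256", launch<256>} };
    Candidate best = cands[0]; float best_ms = 1e30f;
    for (Candidate c : cands) {
        float ms = time_candidate(c, vals, cols, rowptr, B, C, rows, n);
        printf("%-8s %.3f ms\n", c.name, ms);
        if (ms < best_ms) { best_ms = ms; best = c; }
    }
    printf("selected %s for this input\n", best.name);

    cudaFree(vals); cudaFree(cols); cudaFree(rowptr); cudaFree(B); cudaFree(C);
    return 0;
}

A runtime tuner along these lines would additionally cache the winning configuration, keyed by matrix shape and sparsity pattern, so that subsequent calls on the same problem skip the search entirely.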
May 26, 2024 by hgpu