Harmonic CUDA: Asynchronous Programming on GPUs

Jonathan Wapman, Sean Treichler, Serban D. Porumbescu, John D. Owens
University of California, Davis, Davis, California, USA
Proceedings of the 14th International Workshop on Programming Models and Applications for Multicores and Manycores (PMAM'23), February 2023

@inproceedings{wapman2023harmonic,
   title={Harmonic CUDA: Asynchronous Programming on GPUs},
   author={Wapman, Jonathan D and Treichler, Sean and Porumbescu, Serban D and Owens, John D},
   booktitle={Proceedings of the 14th International Workshop on Programming Models and Applications for Multicores and Manycores},
   pages={39--49},
   year={2023}
}


We introduce Harmonic CUDA, a dataflow programming model for GPUs that allows programmers to describe algorithms as a dependency graph of producers and consumers in which data flows continuously through the graph for the duration of the kernel. This makes it easier for programmers to exploit asynchrony, warp specialization, and hardware acceleration. Using Harmonic CUDA, we implement two example applications: matrix multiplication and GraphSage. The matrix multiplication kernel demonstrates how a key kernel can be decomposed into more granular building blocks; it achieves a geometric mean of 80% of cuBLAS performance (up to 92% when omitting small matrices), and we analyze how its performance can be improved in the future. GraphSage shows how asynchrony and warp specialization can provide significant performance improvements while reusing the same building blocks as the matrix multiplication kernel: the warp-specialized version runs 34% faster than a bulk-synchronous implementation. We evaluate the strengths and weaknesses of Harmonic CUDA based on these test cases and suggest future work to improve the programming model.
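
To make the warp specialization mentioned in the abstract concrete, the sketch below shows a conventional hand-written CUDA kernel (not Harmonic CUDA's actual API or building blocks, which are not reproduced here) in which warp 0 acts as a producer that stages tiles into shared memory while the remaining warps consume them. The kernel name, tile size, and the doubling operation are illustrative assumptions.

    // Illustrative warp-specialization sketch; not Harmonic CUDA's API.
    // Warp 0 produces tiles into shared memory; the other warps consume them.
    // Assumes blockDim.x is a multiple of 32 and greater than 32.
    constexpr int TILE = 128;

    __global__ void producer_consumer(const float* in, float* out, int n) {
        __shared__ float tile[TILE];
        const bool is_producer = (threadIdx.x / 32 == 0);   // warp 0 = producer

        // Grid-stride loop over tiles; every thread in the block executes the
        // same iterations, so the block-wide barriers below are safe.
        for (int base = blockIdx.x * TILE; base < n; base += gridDim.x * TILE) {
            if (is_producer) {
                // Producer warp stages the next tile from global to shared memory.
                for (int i = threadIdx.x; i < TILE; i += 32)
                    tile[i] = (base + i < n) ? in[base + i] : 0.0f;
            }
            __syncthreads();   // hand the tile off to the consumer warps
            if (!is_producer) {
                // Consumer warps transform the tile (here: an arbitrary doubling).
                for (int i = threadIdx.x - 32; i < TILE; i += blockDim.x - 32)
                    if (base + i < n) out[base + i] = 2.0f * tile[i];
            }
            __syncthreads();   // tile buffer may now be reused by the producer
        }
    }

In a dataflow model like the one the paper describes, the explicit block-wide barriers above would instead be expressed as dependencies between producer and consumer building blocks, letting data flow through the graph for the duration of the kernel.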