
Towards Performance Portable Programming for Distributed Heterogeneous Systems

Polykarpos Thomadakis, Nikos Chrisochoides
Department of Computer Science, Old Dominion University, Norfolk, Virginia
arXiv:2210.01238 [cs.DC], 3 Oct 2022

@misc{https://doi.org/10.48550/arxiv.2210.01238,
   doi={10.48550/ARXIV.2210.01238},
   url={https://arxiv.org/abs/2210.01238},
   author={Thomadakis, Polykarpos and Chrisochoides, Nikos},
   keywords={Distributed, Parallel, and Cluster Computing (cs.DC), FOS: Computer and information sciences},
   title={Towards Performance Portable Programming for Distributed Heterogeneous Systems},
   publisher={arXiv},
   year={2022},
   copyright={Creative Commons Attribution 4.0 International}
}


Hardware heterogeneity is here to stay for high-performance computing. Large-scale systems are currently equipped with multiple GPU accelerators per compute node and are expected to incorporate more specialized hardware in the future. This shift in the computing ecosystem offers many opportunities for performance improvement; however, it also increases the complexity of programming for such architectures. This work introduces a runtime framework that enables effortless programming for heterogeneous systems while efficiently utilizing hardware resources. The framework is integrated within a distributed and scalable runtime system to facilitate performance portability across heterogeneous nodes. Along with the design, this paper describes the implementation and the optimizations performed, achieving up to a 300% improvement in a shared-memory benchmark and up to a 10-fold improvement in distributed device communication. Preliminary results indicate that our software incurs low overhead and achieves a 40% improvement in a distributed Jacobi proxy application while hiding the idiosyncrasies of the hardware.
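For context on the Jacobi proxy application mentioned in the abstract: a Jacobi solver repeatedly replaces each interior grid point with the average of its four neighbors until the solution stops changing. The following is a minimal single-node sketch of that iteration pattern, not the authors' implementation; the grid size, boundary values, and tolerance are illustrative only.

```python
# Minimal Jacobi iteration sketch (illustrative, not the paper's code).
# Each interior point is replaced by the average of its four neighbors;
# boundary rows/columns are held fixed.

def jacobi_step(grid):
    """One Jacobi sweep; returns the new grid and the max pointwise change."""
    n = len(grid)
    new = [row[:] for row in grid]
    diff = 0.0
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j] +
                                grid[i][j - 1] + grid[i][j + 1])
            diff = max(diff, abs(new[i][j] - grid[i][j]))
    return new, diff

def jacobi(grid, tol=1e-4, max_iters=10000):
    """Iterate until the max change per sweep drops below tol."""
    for it in range(max_iters):
        grid, diff = jacobi_step(grid)
        if diff < tol:
            return grid, it + 1
    return grid, max_iters

# Toy problem: 8x8 grid, top boundary held at 1.0, everything else 0.0.
n = 8
g = [[0.0] * n for _ in range(n)]
g[0] = [1.0] * n
solution, iters = jacobi(g)
```

In the distributed version described by the paper, the grid would be partitioned across nodes and devices, with halo (boundary) exchanges between neighboring partitions after each sweep; that communication is where the reported device-to-device improvements matter.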

HGPU group © 2010-2023 hgpu.org
