
Ripple: Simplified Large-Scale Computation on Heterogeneous Architectures with Polymorphic Data Layout

Robert Clucas, Philip Blakely, Nikolaos Nikiforakis
Maxwell Centre, Cavendish Laboratory, JJ Thomson Avenue, Cambridge, CB3 0HE
arXiv:2104.08571 [cs.DC], 17 Apr 2021

@misc{clucas2021ripple,
   title={Ripple: Simplified Large-Scale Computation on Heterogeneous Architectures with Polymorphic Data Layout},
   author={Robert Clucas and Philip Blakely and Nikolaos Nikiforakis},
   year={2021},
   eprint={2104.08571},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}

GPUs are now used for a wide range of problems within HPC. However, making efficient use of the computational power available with multiple GPUs is challenging. The main challenges in achieving good performance are memory layout, which affects memory bandwidth; effective use of the memory spaces within a GPU; inter-GPU communication; and synchronization. We address these problems with the Ripple library, which provides a unified view of the computational space across multiple dimensions and multiple GPUs, allows polymorphic data layout, and provides a simple graph interface for describing an algorithm, from which inter-GPU data transfers can be optimally scheduled. We describe the abstractions provided by Ripple to allow complex computations to be described simply and executed efficiently across many GPUs with minimal overhead. We show performance results for a number of examples, from particle motion to finite-volume methods and the eikonal equation, as well as good strong and weak scaling results across multiple GPUs.
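
To make the idea of polymorphic data layout concrete, the following minimal C++ sketch shows the general technique: a compile-time layout tag selects between array-of-structs (AoS) and struct-of-arrays (SoA) storage, while the computational code is written once against a common accessor interface. The names used here (LayoutAoS, LayoutSoA, Vec3Array, scale) are invented for this illustration and are not Ripple's actual API; the sketch only illustrates the kind of layout polymorphism the abstract refers to.

#include <cstddef>
#include <cstdio>

struct LayoutAoS {};  // components of each element stored together
struct LayoutSoA {};  // each component stored in its own contiguous array

template <typename Layout>
struct Vec3Array;

// AoS specialization: x, y, z interleaved per element.
template <>
struct Vec3Array<LayoutAoS> {
  struct Vec3 { float x, y, z; };
  Vec3* data;
  float& x(std::size_t i) { return data[i].x; }
  float& y(std::size_t i) { return data[i].y; }
  float& z(std::size_t i) { return data[i].z; }
};

// SoA specialization: one array per component, which typically yields
// coalesced memory accesses on a GPU.
template <>
struct Vec3Array<LayoutSoA> {
  float *xs, *ys, *zs;
  float& x(std::size_t i) { return xs[i]; }
  float& y(std::size_t i) { return ys[i]; }
  float& z(std::size_t i) { return zs[i]; }
};

// Computational code is written once against the accessor interface;
// the storage layout is chosen at compile time via the template parameter.
template <typename Layout>
void scale(Vec3Array<Layout> v, std::size_t n, float s) {
  for (std::size_t i = 0; i != n; ++i) {
    v.x(i) *= s;
    v.y(i) *= s;
    v.z(i) *= s;
  }
}

int main() {
  constexpr std::size_t n = 4;

  // Same algorithm, AoS storage.
  Vec3Array<LayoutAoS>::Vec3 aos_buf[n] = {{1.f, 2.f, 3.f}};
  Vec3Array<LayoutAoS> v{aos_buf};
  scale(v, n, 2.0f);
  std::printf("aos: x[0]=%g y[0]=%g z[0]=%g\n", v.x(0), v.y(0), v.z(0));

  // Same algorithm, SoA storage.
  float xs[n] = {1.f}, ys[n] = {2.f}, zs[n] = {3.f};
  Vec3Array<LayoutSoA> w{xs, ys, zs};
  scale(w, n, 2.0f);
  std::printf("soa: x[0]=%g y[0]=%g z[0]=%g\n", w.x(0), w.y(0), w.z(0));
  return 0;
}

On a GPU the SoA form typically gives coalesced loads and stores, which is why being able to switch layout without rewriting the computational code matters for memory bandwidth.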