
Lightning: Scaling the GPU Programming Model Beyond a Single GPU

Stijn Heldens, Pieter Hijma, Ben van Werkhoven, Jason Maassen, Rob V. van Nieuwpoort
Netherlands eScience Center
arXiv:2202.05549 [cs.DC] (11 Feb 2022)

@misc{heldens2022lightning,
   title={Lightning: Scaling the GPU Programming Model Beyond a Single GPU},
   author={Stijn Heldens and Pieter Hijma and Ben van Werkhoven and Jason Maassen and Rob V. van Nieuwpoort},
   year={2022},
   eprint={2202.05549},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}

The GPU programming model is primarily designed to support the development of applications that run on one GPU. However, a single GPU is limited in both memory capacity and compute power. To handle large problems that exceed these capabilities, one must rewrite application code to manually transfer data between GPU memory and higher-level memory and/or distribute the work across multiple GPUs, possibly across multiple nodes. Scaling GPU applications beyond a single GPU therefore requires a large engineering effort. We present Lightning: a framework that follows the common GPU programming paradigm but enables scaling to larger problems. Lightning enables multi-GPU execution of GPU kernels, even across multiple nodes, and seamlessly spills data to main memory and disk when required. Existing CUDA kernels can easily be adapted for use in Lightning; data access annotations on these kernels allow Lightning to infer their data requirements and dependencies. Lightning distributes work and data across the GPUs and maximizes efficiency by overlapping scheduling, data movement, and computation where possible. We present the design and implementation of Lightning, together with experimental results on up to 32 GPUs for eight benchmarks and an application from geospatial clustering. The evaluation shows excellent performance on problem sizes that far exceed the memory capacity of a single GPU.
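The data access annotations mentioned in the abstract are the key mechanism: by declaring which array elements each kernel argument reads or writes as a function of the thread index, a runtime can work out which slices of each array a given launch needs. The sketch below is purely illustrative and does not use Lightning's actual annotation syntax; the kernel, the comment-style annotations, and all names are hypothetical, showing only the kind of per-argument access pattern such a runtime would rely on.

// Hypothetical sketch (not Lightning's actual API): a standard CUDA
// kernel whose per-argument access patterns are stated up front, so a
// runtime could infer which slices of each array a launch reads and
// writes, and partition the work across GPUs accordingly.

#include <cuda_runtime.h>
#include <cstdio>

// Illustrative annotation for a 1-D launch of n threads:
//   a: read  element [i]
//   b: read  element [i]
//   c: write element [i]
// A per-element pattern like this lets a scheduler split the index
// space across GPUs and move only the matching array slices.
__global__ void vector_add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];  // each thread touches only index i
    }
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Single-GPU launch; a multi-GPU runtime would instead launch one
    // sub-range of [0, n) per device, guided by the declared patterns.
    vector_add<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

With a per-element pattern like this, a scheduler is free to split the one-dimensional index space into chunks, run each chunk on a different GPU, and transfer only the corresponding slices of a, b, and c, which is the kind of distribution and data movement the abstract describes.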