
Exploring LLVM Infrastructure for Simplified Multi-GPU Programming

Alexander Matz, Mark Hummel, Holger Fröning
Ruprecht-Karls University of Heidelberg, Germany
Ninth International Workshop on Programmability and Architectures for Heterogeneous Multicores (MULTIPROG-2016), 2016

@inproceedings{matz2016exploring,
   title={Exploring LLVM Infrastructure for Simplified Multi-GPU Programming},
   author={Matz, Alexander and Hummel, Mark and Fr{\"o}ning, Holger},
   booktitle={Ninth International Workshop on Programmability and Architectures for Heterogeneous Multicores (MULTIPROG-2016)},
   year={2016}
}

GPUs have established themselves in the computing landscape, convincing users and designers with their excellent performance and energy efficiency. They differ in many aspects from general-purpose CPUs, for instance in their highly parallel architecture, their thread-collective bulk-synchronous execution model, and their programming model. In particular, languages like CUDA or OpenCL require users to express parallelism in a very fine-grained but also highly structured, hierarchical way, and to express locality very explicitly. We leverage these observations to derive a methodology for scaling out single-device programs to execution on multiple devices, aggregating compute and memory resources. Our approach comprises three steps:

1. Collect information about data dependencies and memory access patterns using static code analysis.
2. Merge this information in order to choose an appropriate partitioning strategy.
3. Apply code transformations to implement the chosen partitioning and insert calls to a dynamic runtime library.

We envision a tool that allows a user to write a single-device program that utilizes an arbitrary number of GPUs, either within one machine boundary or distributed at cluster level. In this work, we introduce our concept and tool chain for regular workloads. We present results from early experiments that further motivate our work and provide a discussion of related opportunities and future directions.
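To make step 3 concrete, the following hedged sketch hand-writes the kind of transformation the paper's tool chain is meant to derive automatically: a regular 1-D CUDA kernel is block-partitioned across all visible GPUs by splitting the iteration space and passing a per-device offset. Everything here (the launch loop, the use of managed memory, the offset parameter) is an illustrative assumption, not the authors' actual compiler passes or runtime API.

// Hedged sketch: manual multi-GPU partitioning of a regular 1-D kernel.
// The paper's approach derives this transformation automatically from
// static analysis; this hand-written version is illustrative only.
#include <cuda_runtime.h>
#include <cstdio>

// Each device computes a contiguous slice [offset, offset + n) of the
// global iteration space.
__global__ void vecAdd(const float *a, const float *b, float *c,
                       int offset, int n) {
    int i = offset + blockIdx.x * blockDim.x + threadIdx.x;
    if (i < offset + n)
        c[i] = a[i] + b[i];
}

int main() {
    const int N = 1 << 20;
    int devCount = 0;
    cudaGetDeviceCount(&devCount);
    if (devCount == 0) return 1;

    // Managed memory keeps the sketch short and assumes a system where it
    // is visible to all devices; the paper's runtime would instead move
    // only the data slices each device actually touches.
    float *a, *b, *c;
    cudaMallocManaged(&a, N * sizeof(float));
    cudaMallocManaged(&b, N * sizeof(float));
    cudaMallocManaged(&c, N * sizeof(float));
    for (int i = 0; i < N; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Block-partition the iteration space across all visible GPUs.
    const int chunk = (N + devCount - 1) / devCount;
    for (int d = 0; d < devCount; ++d) {
        const int offset = d * chunk;
        const int n = (offset + chunk > N) ? N - offset : chunk;
        if (n <= 0) break;
        cudaSetDevice(d);
        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(a, b, c, offset, n);
    }

    // Wait for all devices before reading results on the host.
    for (int d = 0; d < devCount; ++d) {
        cudaSetDevice(d);
        cudaDeviceSynchronize();
    }
    printf("c[0] = %f (expected 3.0)\n", c[0]);

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

For a regular workload such as this, a contiguous block partition keeps each device's reads and writes confined to a single slice, which is precisely the access-pattern property the static analysis of step 1 tries to establish before a partitioning strategy is chosen in step 2.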
