
Exploring LLVM Infrastructure for Simplified Multi-GPU Programming

Alexander Matz, Mark Hummel, Holger Froning
Ruprecht-Karls University of Heidelberg, Germany
Ninth International Workshop on Programmability and Architectures for Heterogeneous Multicores (MULTIPROG-2016), 2016
@inproceedings{matz2016exploring,
   title={Exploring LLVM Infrastructure for Simplified Multi-GPU Programming},
   author={Matz, Alexander and Hummel, Mark and Fr{\"o}ning, Holger},
   booktitle={Ninth International Workshop on Programmability and Architectures for Heterogeneous Multicores (MULTIPROG-2016)},
   year={2016}
}


GPUs have established themselves in the computing landscape, convincing users and designers with their excellent performance and energy efficiency. They differ in many aspects from general-purpose CPUs, for instance in their highly parallel architecture, their thread-collective bulk-synchronous execution model, and their programming model. In particular, languages like CUDA or OpenCL require users to express parallelism in a fine-grained but highly structured, hierarchical way, and to express locality explicitly. We leverage these observations to derive a methodology for scaling out single-device programs to execution on multiple devices, aggregating compute and memory resources. Our approach comprises three steps:

1. Collect information about data dependencies and memory access patterns using static code analysis.
2. Merge this information to choose an appropriate partitioning strategy.
3. Apply code transformations to implement the chosen partitioning and insert calls to a dynamic runtime library.

We envision a tool that allows a user to write a single-device program that utilizes an arbitrary number of GPUs, either within one machine boundary or distributed at cluster level. In this work, we introduce our concept and tool chain for regular workloads. We present results from early experiments that further motivate our work, and provide a discussion of related opportunities and future directions.
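The abstract does not describe how the chosen partitioning is computed; the paper derives it from static analysis of memory access patterns. As a minimal, hypothetical sketch (not the authors' actual algorithm), the output of such a partitioning step for a regular 1D workload might resemble a contiguous split of the kernel's thread-block range across the available devices:

```python
def partition_blocks(num_blocks, num_devices):
    """Split a 1D grid of thread blocks into contiguous per-device
    ranges (start, end), end exclusive. Any remainder blocks are
    assigned to the first few devices, keeping the split balanced."""
    base, rem = divmod(num_blocks, num_devices)
    ranges = []
    start = 0
    for d in range(num_devices):
        count = base + (1 if d < rem else 0)
        ranges.append((start, start + count))
        start += count
    return ranges

# Example: a grid of 10 blocks spread over 3 GPUs
print(partition_blocks(10, 3))  # → [(0, 4), (4, 7), (7, 10)]
```

A runtime library, as envisioned in step 3, would then launch the original kernel once per device with its block sub-range and replicate or partition the kernel's buffers accordingly; regular (affine) access patterns make the required data movement statically predictable.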

