libWater: Heterogeneous Distributed Computing Made Easy

Ivan Grasso, Simone Pellegrini, Biagio Cosenza, Thomas Fahringer
Institute of Computer Science, University of Innsbruck, Austria
27th ACM International Conference on Supercomputing (ICS), 2013

@inproceedings{grasso2013libwater,
   title={libWater: Heterogeneous Distributed Computing Made Easy},
   author={Grasso, Ivan and Pellegrini, Simone and Cosenza, Biagio and Fahringer, Thomas},
   year={2013},
   organization={ICS}
}


Clusters of heterogeneous nodes composed of multi-core CPUs and GPUs are increasingly being used for High Performance Computing (HPC) due to their benefits in peak performance and energy efficiency. In order to fully harvest the computational capabilities of such architectures, application developers often employ a combination of different parallel programming paradigms (e.g. OpenCL, CUDA, MPI and OpenMP), also known in the literature as hybrid programming, which makes application development very challenging. Furthermore, these languages offer limited support for orchestrating data and computations across heterogeneous systems. In this paper, we present libWater, a uniform approach for programming distributed heterogeneous computing systems. It consists of a simple interface, compliant with the OpenCL programming model, and a runtime system which extends the capabilities of OpenCL beyond single platforms and single compute nodes. libWater enhances the OpenCL event system by enabling inter-context and inter-node device synchronization. Furthermore, libWater's runtime system uses the dependency information enforced by event synchronization to dynamically build a directed acyclic graph (DAG) of enqueued commands, which enables a class of advanced runtime optimizations. The detection and optimization of collective communication patterns is one example which, as shown by experimental results, improves the efficiency of the libWater runtime system for several application codes.
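Since libWater is OpenCL-compliant, the command/event mechanism its runtime builds on can be illustrated with standard OpenCL alone. The minimal sketch below uses only plain OpenCL host calls (not the libWater API, which is not shown on this page): an asynchronous host-to-device copy is followed by a kernel launch that names the copy's event in its wait list. Such wait-list entries are exactly the dependency edges from which a runtime can assemble a DAG of enqueued commands. Within a single context OpenCL resolves these dependencies by itself; according to the abstract, libWater's contribution is extending this event-based synchronization across contexts and compute nodes. Function and variable names are illustrative, and error checking is omitted for brevity.

```c
#include <CL/cl.h>
#include <stddef.h>

/* Enqueue a host-to-device copy and a kernel that depends on it.
 * The event wait list expresses one edge of the command DAG. */
void enqueue_copy_then_kernel(cl_command_queue queue, cl_kernel kernel,
                              cl_mem d_in, cl_mem d_out,
                              const float *h_in, size_t n)
{
    cl_event copy_done, kernel_done;

    /* Asynchronous host-to-device copy; completion is signalled via copy_done. */
    clEnqueueWriteBuffer(queue, d_in, CL_FALSE, 0, n * sizeof(float),
                         h_in, 0, NULL, &copy_done);

    clSetKernelArg(kernel, 0, sizeof(cl_mem), &d_in);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &d_out);

    /* The kernel launch lists copy_done in its wait list, declaring its
     * dependency on the copy without blocking the host. */
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n, NULL,
                           1, &copy_done, &kernel_done);

    /* Intra-context synchronization like this works out of the box in OpenCL;
     * synchronizing events across contexts and nodes is the gap the libWater
     * event system fills. */
    clWaitForEvents(1, &kernel_done);
    clReleaseEvent(copy_done);
    clReleaseEvent(kernel_done);
}
```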