An Automatic Input-Sensitive Approach for Heterogeneous Task Partitioning

Klaus Kofler, Ivan Grasso, Biagio Cosenza, Thomas Fahringer
Institute of Computer Science, University of Innsbruck, Austria
27th ACM international conference on Supercomputing, 2013


@inproceedings{kofler2013automatic,
   author={Kofler, Klaus and Grasso, Ivan and Cosenza, Biagio and Fahringer, Thomas},
   title={An Automatic Input-Sensitive Approach for Heterogeneous Task Partitioning},
   booktitle={Proceedings of the 27th ACM International Conference on Supercomputing},
   series={ICS '13},
   year={2013},
   location={Eugene, Oregon, USA},
   publisher={ACM},
   address={New York, NY, USA},
   keywords={heterogeneous computing, compilers, GPU, task partitioning, code analysis, machine learning, runtime system}
}





Unleashing the full potential of heterogeneous systems, consisting of multi-core CPUs and GPUs, is a challenging task due to the differences in processing capabilities, memory availability, and communication latencies of different computational resources. In this paper we propose a novel approach that automatically optimizes task partitioning for different (input) problem sizes and different heterogeneous architectures. We use the Insieme source-to-source compiler to translate a single-device OpenCL program into a multi-device OpenCL program. The Insieme Runtime System then performs dynamic task partitioning based on an offline-generated prediction model. In order to derive the prediction model, we use a machine learning approach based on Artificial Neural Networks (ANN) that incorporates static program features as well as dynamic, input-sensitive features. Principal component analysis has been used to further improve the task partitioning. Our approach has been evaluated over a suite of 23 programs and achieves a performance improvement of 22% and 25%, respectively, compared to an execution of the benchmarks on a single CPU and a single GPU, which is equal to 87.5% of the optimal performance.
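The pipeline the abstract describes — combine static program features with dynamic, input-sensitive features, reduce them with principal component analysis, and train an artificial neural network to predict a task partitioning — can be sketched as follows. This is not the authors' implementation; the feature set, the discrete partitioning classes (fraction of work sent to the GPU), and the synthetic training data are all illustrative assumptions, using scikit-learn in place of whatever ANN framework the paper used.

```python
# Hedged sketch of the abstract's approach: PCA over static + dynamic
# program features, feeding an ANN that predicts a partitioning class.
# All feature names, classes, and data below are illustrative, not the
# paper's actual model.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Illustrative feature vectors: e.g. [arithmetic ops, memory accesses,
# branch count, input problem size] -- the last one is the dynamic,
# input-sensitive feature the abstract emphasizes.
X = rng.random((200, 4))

# Illustrative partitioning classes: fraction of the workload assigned
# to the GPU (0.0 = all CPU, 1.0 = all GPU).
classes = np.array([0.0, 0.5, 1.0])
# Toy labeling rule standing in for measured best partitionings.
y = classes[((X[:, 0] + X[:, 3]) > 1.0).astype(int) * 2]

# PCA for dimensionality reduction, then a small neural network.
model = make_pipeline(
    PCA(n_components=3),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X, y)

# At runtime, the trained model would pick a partitioning for a new
# kernel invocation from its feature vector.
pred = model.predict(rng.random((5, 4)))
assert set(pred).issubset(set(classes))
```

In the paper's setting the predicted class would drive the Insieme Runtime System's dynamic split of OpenCL work-items across the CPU and GPU devices; the sketch only shows the offline model-building side.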

