
StarPU: A Unified Platform for Task Scheduling on Heterogeneous Multicore Architectures

Cédric Augonnet, Samuel Thibault, Raymond Namyst, Pierre-André Wacrenier
University of Bordeaux, LaBRI, INRIA Bordeaux Sud-Ouest
In Euro-Par 2009 Parallel Processing, Vol. 5704 (2009), pp. 863-874

@inproceedings{augonnet2009starpu,
   title={{StarPU}: a unified platform for task scheduling on heterogeneous multicore architectures},
   author={Augonnet, C{\'e}dric and Thibault, Samuel and Namyst, Raymond and Wacrenier, Pierre-Andr{\'e}},
   booktitle={Euro-Par 2009 Parallel Processing},
   series={Lecture Notes in Computer Science},
   volume={5704},
   pages={863--874},
   year={2009},
   publisher={Springer}
}

In the field of HPC, the current hardware trend is to design multiprocessor architectures that feature heterogeneous technologies such as specialized coprocessors (e.g. Cell/BE SPUs) or data-parallel accelerators (e.g. GPGPUs). Approaching the theoretical performance of these architectures is a complex issue. Indeed, substantial efforts have already been devoted to offloading parts of the computations efficiently. However, designing an execution model that unifies all computing units and their associated embedded memory remains one of the main challenges. We have therefore designed StarPU, an original runtime system providing a high-level, unified execution model tightly coupled with an expressive data management library. The main goal of StarPU is to provide numerical kernel designers with a convenient way to generate parallel tasks over heterogeneous hardware on the one hand, and to easily develop and tune powerful scheduling algorithms on the other hand. We have developed several strategies that can be selected seamlessly at run time, and we have demonstrated their efficiency by analyzing the impact of those scheduling policies on several classical linear algebra algorithms that take advantage of multiple cores and GPUs at the same time. In addition to substantial improvements in execution times, we obtained consistent superlinear parallelism by actually exploiting the heterogeneous nature of the machine.
