StarPU: a Runtime System for Scheduling Tasks over Accelerator-Based Multicore Machines
Laboratoire Bordelais de Recherche en Informatique (LaBRI), CNRS : UMR 5800 – Université Sciences et Technologies – Bordeaux I – École Nationale Supérieure d’Électronique, Informatique et Radiocommunications de Bordeaux – Université Victor Segalen – Bordeaux II
INRIA, Research Report, inria-00467677
@techreport{augonnet2010starpu,
  title={StarPU: a Runtime System for Scheduling Tasks over Accelerator-Based Multicore Machines},
  author={Augonnet, C. and Thibault, S. and Namyst, R.},
  institution={INRIA},
  number={inria-00467677},
  year={2010}
}
Multicore machines equipped with accelerators are becoming increasingly popular. The TOP500-leading Roadrunner machine is probably the most famous example of a parallel computer mixing IBM Cell Broadband Engines and AMD Opteron processors. Other architectures, featuring GPU accelerators, are expected to appear in the near future. To fully tap into the potential of these hybrid machines, pure offloading approaches, in which the core of the application runs on regular processors and offloads specific parts onto accelerators, are not sufficient. The real challenge is to build systems where the application is permanently spread across the entire machine, that is, where parallel tasks are dynamically scheduled over the full set of available processing units. To face this challenge, we propose a new runtime system capable of scheduling tasks over heterogeneous, accelerator-based machines. Our system features a software virtual shared memory that provides a weak consistency model. The system keeps track of data copies within accelerator-embedded memories and features a data-prefetching engine. Such facilities, together with a database of self-tuned per-task performance models, can be used to greatly improve the quality of scheduling policies in this context. We demonstrate the relevance of our approach by benchmarking various parallel numerical kernel implementations over our runtime system. We obtain significant speedups and a very high efficiency on various typical workloads over multicore machines equipped with multiple accelerators.
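To illustrate how per-task performance models can drive scheduling, the following is a minimal, self-contained sketch (not StarPU's actual code) of a HEFT-style policy: each candidate processing unit has an availability time, a model-predicted kernel duration, and an estimated data-transfer cost, and the scheduler picks the unit minimizing expected completion time. The type and function names (`unit_estimate`, `pick_unit`) are hypothetical.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical per-unit estimates for one task, in arbitrary time units. */
typedef struct {
    double avail;      /* time at which the unit becomes free            */
    double transfer;   /* estimated cost of moving the task's data there */
    double predicted;  /* performance-model estimate of kernel duration  */
} unit_estimate;

/* Return the index of the unit minimizing expected completion time,
 * i.e. avail + transfer + predicted. This is the core idea behind
 * model-driven policies such as HEFT; real schedulers also account
 * for prefetching overlap, which this sketch omits. */
size_t pick_unit(const unit_estimate *units, size_t n)
{
    size_t best = 0;
    double best_finish = units[0].avail + units[0].transfer + units[0].predicted;
    for (size_t i = 1; i < n; i++) {
        double finish = units[i].avail + units[i].transfer + units[i].predicted;
        if (finish < best_finish) {
            best_finish = finish;
            best = i;
        }
    }
    return best;
}
```

Note that a slow-but-idle CPU can still win over a fast-but-busy GPU once transfer costs and queue lengths are folded in, which is exactly the trade-off the self-tuned models make visible to the scheduler.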
February 26, 2011 by hgpu