Automatic Mapping of Stream Programs on Multicore Architectures

Pablo de Oliveira Castro, Stéphane Louise, Denis Barthou
CEA, LIST
International Workshop on Compilers for Parallel Computers, 2010

@inproceedings{deoliveiracastro:hal-00551680,

   hal_id={hal-00551680},

   url={http://hal.archives-ouvertes.fr/hal-00551680},

   title={Automatic Mapping of Stream Programs on Multicore Architectures},

   author={De Oliveira Castro, Pablo and Louise, St{\'e}phane and Barthou, Denis},

   language={English},

   affiliation={Laboratoire d'Int{\'e}gration des Syst{\`e}mes et des Technologies -- CEA LIST, Laboratoire Bordelais de Recherche en Informatique -- LaBRI, RUNTIME -- INRIA Bordeaux -- Sud-Ouest},

   booktitle={International Workshop on Compilers for Parallel Computers},

   address={Vienna, Austria},

   audience={not specified},

   year={2010},

   month={Jul},

   pdf={http://hal.archives-ouvertes.fr/hal-00551680/PDF/cpc10.pdf}

}

Stream languages explicitly describe fork-join and pipeline parallelism, offering a powerful programming model for general multicore systems. This parallelism description can be exploited on hybrid architectures, e.g., those composed of Graphics Processing Units (GPUs) and general-purpose multicore processors. In this paper, we present a novel approach to optimizing stream programs for hybrid architectures composed of a GPU and multicore CPUs. The approach targets the memory and communication performance bottlenecks of this kind of architecture. The initial task graph of the stream program is first transformed to reduce fork-join synchronization costs; the transformation is obtained by applying a sequence of elementary stream restructurings that enable communication-efficient mappings. Tasks are then scheduled in a software pipeline and coarsened with a coarsening level adapted to their placement (CPU or GPU). Our experiments show that both the synchronization cost reduction and the coarsening step, which adapts the grain of parallelism to the CPUs and to the GPU, are important for performance.
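The placement-dependent coarsening idea can be illustrated with a small sketch. This is not the paper's implementation; the `StreamTask` type, the `coarsen` helper, and the batch factors are hypothetical, chosen only to show why GPU-mapped tasks typically receive a coarser grain (to amortize kernel-launch and transfer overheads) than CPU-mapped ones:

```python
from dataclasses import dataclass

@dataclass
class StreamTask:
    name: str
    placement: str          # "CPU" or "GPU"
    items_per_firing: int   # base number of stream items consumed per firing

def coarsen(task: StreamTask, cpu_factor: int = 4, gpu_factor: int = 256) -> int:
    """Return the coarsened batch size for one scheduled firing.

    GPU-mapped tasks get a larger factor to hide launch and transfer
    latency; the factors here are illustrative, not from the paper.
    """
    factor = gpu_factor if task.placement == "GPU" else cpu_factor
    return task.items_per_firing * factor

# A toy three-stage software pipeline with a GPU-mapped middle stage.
pipeline = [
    StreamTask("source", "CPU", 1),
    StreamTask("filter", "GPU", 1),
    StreamTask("sink",   "CPU", 1),
]
for t in pipeline:
    print(t.name, t.placement, coarsen(t))
```

Under this sketch, the GPU stage processes 256 items per firing while its CPU neighbors process 4, so the scheduler would insert buffering between stages of different grain.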
