Task Parallelism and Synchronization: An Overview of Explicit Parallel Programming Languages
CRI, Mathématiques et systèmes, MINES ParisTech, 35 rue Saint-Honoré, 77300 Fontainebleau, France
Technical Report CRI/A-486, MINES ParisTech, 2012
@techreport{khaldi2012task,
  title={Task Parallelism and Synchronization: An Overview of Explicit Parallel Programming Languages},
  author={Khaldi, D. and Jouvelot, P. and Ancourt, C. and Irigoin, F.},
  year={2012},
  institution={MINES ParisTech},
  type={Technical Report},
  number={CRI/A-486}
}
Programming parallel machines as effectively as sequential ones would ideally require a language that provides high-level programming constructs in order to avoid the programming errors that frequently arise when expressing parallelism. Since task parallelism is often considered more error-prone than data parallelism, we survey six popular and efficient parallel programming languages that tackle this difficult issue: Cilk, Chapel, X10, Habanero-Java, OpenMP and OpenCL. Using as a single running example a parallel implementation of the computation of the Mandelbrot set, this paper describes how the fundamentals of task-parallel programming, namely collective and point-to-point synchronization and mutual exclusion, are dealt with in these languages. Our study suggests that, even though these languages introduce a wealth of different names and notions, they all boil down to three key task concepts: creation, synchronization and atomicity. The paper is designed to give users, language designers and compiler designers an overview of current parallel languages.
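To make the three concepts concrete, the following is a minimal sketch, not taken from the report, of a task-parallel Mandelbrot computation in C with OpenMP (one of the six surveyed languages). It illustrates task creation (`#pragma omp task`, one task per image row), collective synchronization (`#pragma omp taskwait`) and atomicity (`#pragma omp atomic` on a shared counter). The image size, iteration cap and complex-plane window are arbitrary choices for the example.

```c
/* Illustrative sketch: task creation, collective synchronization and
 * atomicity in OpenMP, applied to the Mandelbrot set.
 * Compile with: cc -fopenmp mandel.c -o mandel                        */
#include <stdio.h>

#define WIDTH    512
#define HEIGHT   512
#define MAX_ITER 255

static int image[HEIGHT][WIDTH];

/* Number of iterations before the point (cr, ci) escapes, capped at MAX_ITER. */
static int mandel(double cr, double ci) {
    double zr = 0.0, zi = 0.0;
    int iter = 0;
    while (zr * zr + zi * zi <= 4.0 && iter < MAX_ITER) {
        double tmp = zr * zr - zi * zi + cr;
        zi = 2.0 * zr * zi + ci;
        zr = tmp;
        iter++;
    }
    return iter;
}

int main(void) {
    long inside = 0;  /* shared counter: points that never escape */

    #pragma omp parallel
    #pragma omp single
    {
        for (int row = 0; row < HEIGHT; row++) {
            /* Task creation: one task per image row. */
            #pragma omp task firstprivate(row) shared(image, inside)
            {
                for (int col = 0; col < WIDTH; col++) {
                    double cr = -2.0 + 3.0 * col / WIDTH;
                    double ci = -1.5 + 3.0 * row / HEIGHT;
                    int it = mandel(cr, ci);
                    image[row][col] = it;
                    if (it == MAX_ITER) {
                        /* Atomicity: protect the shared counter update. */
                        #pragma omp atomic
                        inside++;
                    }
                }
            }
        }
        /* Collective synchronization: wait for all row tasks to finish. */
        #pragma omp taskwait
        printf("%ld of %d points did not escape\n", inside, WIDTH * HEIGHT);
    }
    return 0;
}
```

The same structure maps onto the other surveyed languages: `spawn`/`sync` in Cilk, `async`/`finish` in X10 and Habanero-Java, and `begin`/`sync` in Chapel play roles analogous to `task` and `taskwait` here.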