
A Comparative Study of Asynchronous Many-Tasking Runtimes: Cilk, Charm++, ParalleX and AM++

Abhishek Kulkarni, Andrew Lumsdaine
Center for Research in Extreme Scale Technologies, Indiana University
arXiv:1904.00518 [cs.DC], 1 Apr 2019

@misc{kulkarni2019comparative,
   title={A Comparative Study of Asynchronous Many-Tasking Runtimes: Cilk, Charm++, ParalleX and AM++},
   author={Abhishek Kulkarni and Andrew Lumsdaine},
   year={2019},
   eprint={1904.00518},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}


We evaluate and compare four contemporary and emerging runtimes for high-performance computing (HPC) applications: Cilk, Charm++, ParalleX and AM++. We compare them along three axes: programming model, execution model and the implementation on an underlying machine model. The study includes a survey of each runtime system's programming model, its corresponding execution model, its stated features, and its performance and productivity goals. We first qualitatively compare these runtimes with programmability in mind. The differences in expressivity and programmability arising from their syntax and semantics are made explicit through examples common to all runtimes. Then, the execution model of each runtime, which acts as a bridge between the programming model and the underlying machine model, is compared and contrasted with those of the others. We also evaluate four mature implementations of these runtimes, namely Intel Cilk++, Charm++ 6.5.1, AM++ and HPX, each of which embodies the principles dictated by these models. With the emergence of the next generation of supercomputers, it is imperative for parallel programming models to evolve and address the challenges introduced by increasing scale. Rather than picking a winner among the four models under consideration, we conclude with a discussion of the lessons learned and of how such a study can guide the evolution of parallel programming frameworks to address these challenges.
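To give a flavor of the kind of side-by-side comparison described in the abstract, the sketch below (not taken from the paper) expresses the same recursive Fibonacci computation in two of the surveyed models: the fork-join style of Cilk and the futures-based style of HPX, the ParalleX implementation evaluated in the study. The use of Fibonacci as the common example is an assumption for illustration; the paper's own examples may differ.

    /* Cilk: fork-join parallelism via spawn/sync keywords */
    #include <cilk/cilk.h>

    long fib(int n) {
        if (n < 2) return n;
        long x = cilk_spawn fib(n - 1); /* spawned child may run in parallel */
        long y = fib(n - 2);            /* parent continues concurrently */
        cilk_sync;                      /* wait for the spawned child */
        return x + y;
    }

    // HPX: lightweight tasks and futures (illustrative sketch; header
    // layout varies across HPX versions)
    #include <hpx/hpx_main.hpp>
    #include <hpx/include/async.hpp>

    long fib(int n) {
        if (n < 2) return n;
        hpx::future<long> x = hpx::async(fib, n - 1); // asynchronous task
        long y = fib(n - 2);
        return x.get() + y;                           // suspend on the future
    }

In Cilk, synchronization is implicit in the strict fork-join structure of spawn and sync, whereas in HPX the future is a first-class value that can be stored, passed around and composed, which is one of the expressivity differences the paper's comparison highlights.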
