An events based algorithm for distributing concurrent tasks on multi-core architectures

David W. Holmes, John R. Williams, Peter Tilke
Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139-4307, United States
Computer Physics Communications, Volume 181, Issue 2, February 2010, Pages 341-354

@article{holmes2010events,

   title={An events based algorithm for distributing concurrent tasks on multi-core architectures},

   author={Holmes, D.W. and Williams, J.R. and Tilke, P.},

   journal={Computer Physics Communications},

   volume={181},

   number={2},

   pages={341–354},

   issn={0010-4655},

   year={2010},

   publisher={Elsevier}

}


In this paper, a programming model is presented which enables scalable parallel performance on multi-core shared memory architectures. The model has been developed for application to a wide range of numerical simulation problems. Such problems involve time stepping or iteration algorithms where synchronization of multiple threads of execution is required. It is shown that traditional approaches to parallelism, including message passing and scatter-gather, can be improved upon in terms of speed-up and memory management. Spatial decomposition is used to create orthogonal computational tasks, and a new task management algorithm called H-Dispatch is developed to distribute them. This algorithm makes efficient use of memory resources by limiting the need for garbage collection and takes optimal advantage of multiple cores by employing a “hungry” pull strategy. The technique is demonstrated on a simple finite difference solver and results are compared to traditional MPI and scatter-gather approaches. The H-Dispatch approach achieves near linear speed-up, with an efficiency of 85% on a 24-core machine. It is noted that the H-Dispatch algorithm is quite general and can be applied to a wide class of computational tasks on heterogeneous architectures involving multi-core and GPGPU hardware.
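The “hungry” pull strategy described in the abstract can be illustrated with a minimal sketch: idle worker threads pull the next task from a shared queue rather than receiving a fixed, pre-assigned block, so faster cores naturally absorb more work. This is not the authors' H-Dispatch implementation; the function name `hungry_dispatch` and the grid sub-domain example are assumptions made for illustration only.

```python
import queue
import threading

def hungry_dispatch(tasks, worker_fn, n_workers=4):
    """Minimal pull-based ("hungry") dispatcher: each idle worker pulls
    its next task from a shared queue, balancing load dynamically."""
    task_q = queue.Queue()
    for t in tasks:
        task_q.put(t)

    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                # Pull work only when hungry; no pre-assigned partition.
                t = task_q.get_nowait()
            except queue.Empty:
                return  # queue drained; this worker is done
            r = worker_fn(t)
            with lock:
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results

# Example: each "task" is one sub-domain of a spatially decomposed grid,
# mirroring the paper's use of spatial decomposition to create
# orthogonal computational tasks.
subdomains = [list(range(i, i + 10)) for i in range(0, 100, 10)]
sums = hungry_dispatch(subdomains, sum)
```

Contrast this with scatter-gather, where each thread is handed a fixed share of the domain up front: with a pull strategy, a slow core simply pulls fewer tasks, so no thread sits idle waiting for a straggler's pre-assigned block.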
