
Message passing on data-parallel architectures

Jeff A. Stuart, John D. Owens
Department of Computer Science, University of California, Davis
In IEEE International Symposium on Parallel & Distributed Processing (IPDPS 2009), pp. 1–12

@conference{stuart2009message,
   title={Message passing on data-parallel architectures},
   author={Stuart, J.A. and Owens, J.D.},
   booktitle={Parallel \& Distributed Processing, 2009. IPDPS 2009. IEEE International Symposium on},
   pages={1--12},
   issn={1530-2075},
   year={2009},
   organization={IEEE}
}


This paper explores the challenges of implementing a message passing interface on systems with data-parallel processors. As a case study, we design and implement DCGN, an MPI-like API for NVIDIA GPUs that allows full access to the underlying architecture. We introduce the notion of data-parallel thread groups as a way to map resources to MPI ranks, using a method that also allows the data-parallel processors to run autonomously from user-written CPU code. To facilitate communication, we use a sleep-based polling system to store and retrieve messages. Unlike previous systems, our method provides both performance and flexibility. By running a test suite of applications with different communication requirements, we find that a tolerable amount of overhead is incurred (between one and five percent, depending on the application) and identify where this overhead accumulates. We conclude that with innovations in chipsets and drivers, this overhead will be mitigated, yielding performance similar to typical CPU-based MPI implementations while providing fully dynamic communication.

* * *

HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors
