
Mapping a Data-Flow Programming Model onto Heterogeneous Platforms

Alina Sbirlea, Yi Zou, Zoran Budimlic, Jason Cong, Vivek Sarkar
Rice University
Conference on Languages, Compilers, Tools and Theory for Embedded Systems (LCTES), 2012

@inproceedings{sbirlea2012mapping,
   title     = {Mapping a Data-Flow Programming Model onto Heterogeneous Platforms},
   author    = {Sbirlea, Alina and Zou, Yi and Budimlic, Zoran and Cong, Jason and Sarkar, Vivek},
   booktitle = {Conference on Languages, Compilers, Tools and Theory for Embedded Systems (LCTES)},
   year      = {2012}
}



In this paper we explore the mapping of a high-level macro data-flow programming model called Concurrent Collections (CnC) onto heterogeneous platforms in order to achieve high performance and low energy consumption while preserving the ease of use of data-flow programming. Modern computing platforms are becoming increasingly heterogeneous in order to improve energy efficiency. This trend is clearly seen across a diverse spectrum of platforms, from small-scale embedded SoCs to large-scale supercomputers. However, programming these heterogeneous platforms poses a serious challenge for application developers. We have designed a software flow for converting high-level CnC programs to the Habanero-C language. CnC programs cleanly separate the application description, the implementation of each application component, and the abstraction of the hardware platform, making CnC an excellent programming model for domain experts. Domain experts can later employ the help of a tuning expert (either a compiler or a person) to tune their applications with minimal effort. We also extend the Habanero-C runtime system to support work-stealing across heterogeneous computing devices and introduce task affinity for these heterogeneous components to allow users to fine-tune the runtime scheduling decisions. We demonstrate a working example that maps a pipeline of medical image-processing algorithms onto a prototype heterogeneous platform that includes CPUs, GPUs and FPGAs. For the medical imaging domain, where obtaining fast and accurate results is a critical step in diagnosis and treatment of patients, we show that our model offers up to 17.72x speedup and an estimated usage of 0.52x of the power used by CPUs alone, when using accelerators (GPUs and FPGAs) and CPUs.
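The abstract describes tagging data-flow steps with a device affinity so that the extended work-stealing runtime can prefer a CPU, GPU, or FPGA worker for each stage of the pipeline. As a rough illustration only, the plain-C sketch below shows the general idea of attaching an affinity to each pipeline step and dispatching accordingly; the stage names, the device enumeration, and the dispatch loop are assumptions made for this example and are not the paper's Habanero-C constructs or API.

```c
/* Hypothetical sketch (not from the paper): tasks carry a preferred device,
 * and a scheduler stand-in dispatches each step to that device's worker.
 * All names here are invented for illustration. */
#include <stdio.h>

typedef enum { DEV_CPU, DEV_GPU, DEV_FPGA } device_t;

typedef struct {
    const char *name;       /* step name, e.g. one stage of an imaging pipeline */
    device_t    affinity;   /* preferred device; a real runtime may still steal */
    void      (*run)(void); /* step body */
} step_t;

static void denoise(void)      { printf("denoise step\n"); }
static void registration(void) { printf("registration step\n"); }
static void segment(void)      { printf("segmentation step\n"); }

static const char *dev_name(device_t d) {
    switch (d) {
    case DEV_GPU:  return "GPU";
    case DEV_FPGA: return "FPGA";
    default:       return "CPU";
    }
}

int main(void) {
    /* A toy three-stage pipeline with per-step affinities. In the paper's
     * setting, placement is handled by the extended Habanero-C work-stealing
     * runtime rather than by a sequential loop like this one. */
    step_t pipeline[] = {
        { "denoise",      DEV_GPU,  denoise },
        { "registration", DEV_FPGA, registration },
        { "segmentation", DEV_CPU,  segment },
    };
    for (unsigned i = 0; i < sizeof pipeline / sizeof pipeline[0]; ++i) {
        printf("dispatching %s to %s worker\n",
               pipeline[i].name, dev_name(pipeline[i].affinity));
        pipeline[i].run();  /* stand-in for enqueuing onto that device's deque */
    }
    return 0;
}
```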
