A framework for efficient and scalable execution of domain-specific templates on GPUs

Narayanan Sundaram, Anand Raghunathan, S. T. Chakradhar
NEC Laboratories America, Princeton, NJ, USA
2009 IEEE International Symposium on Parallel & Distributed Processing (IPDPS 2009). Publisher: IEEE, Pages: 1-12

@inproceedings{sundaram2009framework,
   title={A framework for efficient and scalable execution of domain-specific templates on GPUs},
   author={Sundaram, N. and Raghunathan, A. and Chakradhar, S. T.},
   booktitle={2009 IEEE International Symposium on Parallel \& Distributed Processing (IPDPS)},
   pages={1--12},
   year={2009},
   publisher={IEEE}
}

Graphics processing units (GPUs) have emerged as important players in the transition of the computing industry from sequential to multi- and many-core computing. We propose a software framework for execution of domain-specific parallel templates on GPUs, which simultaneously raises the abstraction level of GPU programming and ensures efficient execution with forward scalability to large data sizes and new GPU platforms. To achieve scalable and efficient GPU execution, our framework focuses on two critical problems that have been largely ignored in previous efforts: processing large data sets that do not fit within the GPU memory, and minimizing data transfers between the host and the GPU. Our framework takes domain-specific parallel programming templates expressed as parallel operator graphs, and performs operator splitting, offload unit identification, and scheduling of offloaded computations and data transfers between the host and the GPU, to generate a highly optimized execution plan. Finally, a code generator produces a hybrid CPU/GPU program, in accordance with the derived execution plan, that uses lower-level frameworks such as CUDA. We have applied the proposed framework to templates from the recognition domain, specifically edge detection kernels and convolutional neural networks that are commonly used in image and video analysis. We present results on two different GPU platforms from NVIDIA (a Tesla C870 GPU computing card and a GeForce 8800 graphics card) that demonstrate 1.7-7.8X performance improvements over already accelerated baseline GPU implementations. We also demonstrate scalability to input data sets and application memory footprints of 6 GB and 17 GB, respectively, on GPU platforms with only 768 MB and 1.5 GB of memory.
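
The sketch below is not the authors' framework; it only illustrates, in plain CUDA, the scheduling idea the abstract alludes to: splitting an operator over an input that exceeds the GPU memory budget and overlapping host-GPU transfers with kernel execution using streams. The kernel name, chunk size, and stream count are illustrative assumptions, not values from the paper.

// Minimal sketch (assumptions noted above): process an input larger than the
// device working set by splitting it into chunks and round-robin scheduling
// (copy-in, compute, copy-out) across CUDA streams.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scaleKernel(const float* in, float* out, int n, float a) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = a * in[i];  // stand-in for a real operator (e.g. one edge-detection stage)
}

int main() {
    const size_t N     = size_t(1) << 26;  // 64M floats (~256 MB); stands in for data exceeding GPU memory
    const size_t CHUNK = size_t(1) << 22;  // "split" size chosen so each piece fits on the device
    const int NSTREAMS = 2;                // double buffering: copy chunk k+1 while computing chunk k

    // Pinned host buffers so cudaMemcpyAsync can overlap with kernel execution.
    float *hIn, *hOut;
    cudaHostAlloc(&hIn,  N * sizeof(float), cudaHostAllocDefault);
    cudaHostAlloc(&hOut, N * sizeof(float), cudaHostAllocDefault);
    for (size_t i = 0; i < N; ++i) hIn[i] = float(i);

    // One pair of device buffers per stream; reuse is safe because work
    // within a stream executes in issue order.
    float *dIn[NSTREAMS], *dOut[NSTREAMS];
    cudaStream_t s[NSTREAMS];
    for (int k = 0; k < NSTREAMS; ++k) {
        cudaMalloc(&dIn[k],  CHUNK * sizeof(float));
        cudaMalloc(&dOut[k], CHUNK * sizeof(float));
        cudaStreamCreate(&s[k]);
    }

    // Static round-robin schedule of transfers and computation per chunk.
    for (size_t off = 0, k = 0; off < N; off += CHUNK, k = (k + 1) % NSTREAMS) {
        size_t n = (off + CHUNK <= N) ? CHUNK : N - off;
        cudaMemcpyAsync(dIn[k], hIn + off, n * sizeof(float),
                        cudaMemcpyHostToDevice, s[k]);
        scaleKernel<<<(unsigned)((n + 255) / 256), 256, 0, s[k]>>>(dIn[k], dOut[k], (int)n, 2.0f);
        cudaMemcpyAsync(hOut + off, dOut[k], n * sizeof(float),
                        cudaMemcpyDeviceToHost, s[k]);
    }
    cudaDeviceSynchronize();
    printf("hOut[12345] = %f (expected %f)\n", hOut[12345], 2.0f * 12345.0f);

    for (int k = 0; k < NSTREAMS; ++k) {
        cudaFree(dIn[k]); cudaFree(dOut[k]); cudaStreamDestroy(s[k]);
    }
    cudaFreeHost(hIn); cudaFreeHost(hOut);
    return 0;
}

In the framework described by the paper, the chunk sizes and the transfer/compute schedule are derived automatically from the parallel operator graph rather than hand-chosen as in this sketch.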