Exploring the multiple-GPU design space

Dana Schaa, David Kaeli
Department of Electrical and Computer Engineering, Northeastern University, USA
In IPDPS '09: Proceedings of the 2009 IEEE International Symposium on Parallel & Distributed Processing (May 2009), pp. 1-12.


@inproceedings{schaa2009exploring,
   title={Exploring the multiple-GPU design space},
   author={Schaa, D. and Kaeli, D.},
   booktitle={Parallel \& Distributed Processing, 2009. IPDPS 2009. IEEE International Symposium on},
   pages={1--12},
   year={2009}
}



Graphics Processing Units (GPUs) have been growing in popularity due to their impressive processing capabilities, and, with general-purpose programming interfaces such as NVIDIA's CUDA, are becoming the platform of choice in the scientific computing community. Previous GPU studies focused on obtaining significant performance gains from execution on a single GPU, employing low-level, architecture-specific tuning to achieve sizeable benefits over multicore CPU execution. In this paper, we consider the benefits of running on multiple (parallel) GPUs to provide further orders of magnitude of speedup. Our methodology allows developers to accurately predict execution time for GPU applications while varying the number and configuration of the GPUs and the size of the input data set. This is a natural next step in GPU computing because it allows researchers to determine the most appropriate GPU configuration for an application without having to purchase hardware or write the code for a multiple-GPU implementation. When used to predict performance on six scientific applications, our framework produces accurate performance estimates (11% difference on average, with a 40% maximum difference in a single case) for a range of short- and long-running scientific programs.
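The core idea the abstract describes — estimating multi-GPU execution time from the GPU count and input size before buying hardware — can be sketched with a simple analytical model. The function below is an illustrative assumption, not the paper's actual model: it splits a data-parallel workload evenly across GPUs and sums per-share compute time, PCIe transfer time, and a fixed launch overhead. All rates and parameter names are hypothetical placeholders.

```python
# Hypothetical sketch of a multi-GPU execution-time predictor in the spirit
# of the paper's framework: total time = per-GPU compute time (work divided
# across GPUs) + host<->GPU transfer time for each GPU's share of the data.
# Every constant below is an illustrative assumption, not a measured value.

def predict_time(total_elements, num_gpus,
                 sec_per_element=2e-9,       # assumed single-GPU compute rate
                 bytes_per_element=4,        # e.g. float32 input data
                 pcie_bandwidth=6e9,         # assumed bytes/sec over PCIe
                 launch_overhead=5e-5):      # assumed fixed per-kernel cost
    """Estimate wall-clock seconds for a data-parallel kernel on num_gpus GPUs."""
    # Each GPU gets an (approximately) equal share of the elements.
    elements_per_gpu = (total_elements + num_gpus - 1) // num_gpus
    compute = elements_per_gpu * sec_per_element
    transfer = elements_per_gpu * bytes_per_element / pcie_bandwidth
    # GPUs run concurrently, so wall-clock time is one GPU's share plus overhead.
    return compute + transfer + launch_overhead

if __name__ == "__main__":
    n = 100_000_000
    for gpus in (1, 2, 4, 8):
        print(f"{gpus} GPU(s): {predict_time(n, gpus) * 1e3:.2f} ms")
```

A model of this shape makes the trade-off in the abstract concrete: doubling the GPU count roughly halves the compute and transfer terms, but the fixed overhead term caps the achievable speedup, which is why the best configuration depends on the input size.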

* * *


HGPU group © 2010-2021 hgpu.org

All rights belong to the respective authors
