
On the energy efficiency of graphics processing units for scientific computing

S. Huang, S. Xiao, W. Feng
Department of Computer Science, Virginia Tech
Fifth Workshop on High-Performance, Power-Aware Computing (HPPAC'09), held in conjunction with the IEEE International Symposium on Parallel & Distributed Processing (IPDPS 2009), pp. 1-8

@conference{huang2009energy,

   title={On the energy efficiency of graphics processing units for scientific computing},

   author={Huang, S. and Xiao, S. and Feng, W.},

   booktitle={Parallel \& Distributed Processing, 2009. IPDPS 2009. IEEE International Symposium on},

   pages={1--8},

   issn={1530-2075},

   year={2009},

   organization={IEEE}

}

The graphics processing unit (GPU) has emerged as a computational accelerator that dramatically reduces the time to discovery in high-end computing (HEC). However, while today's state-of-the-art GPU can easily reduce the execution time of a parallel code by many orders of magnitude, it arguably comes at the expense of significant power and energy consumption. For example, the NVIDIA GTX 280 video card is rated at 236 watts, which is as much as the rest of a compute node, thus requiring a 500-W power supply. As a consequence, the GPU has been viewed as a "non-green" computing solution. This paper seeks to characterize, and perhaps debunk, the notion of a "power-hungry GPU" via an empirical study of the performance, power, and energy characteristics of GPUs for scientific computing. Specifically, we take an important biological code that runs in a traditional CPU environment and transform and map it to a hybrid CPU+GPU environment. The end result is that our hybrid CPU+GPU environment, hereafter referred to simply as GPU environment, delivers an energy-delay product that is multiple orders of magnitude better than a traditional CPU environment, whether unicore or multicore.
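
The abstract's comparison rests on the energy-delay product (EDP), conventionally defined as energy times execution time, i.e. (average power x runtime) x runtime. The sketch below is only an illustration of that metric, not the authors' measurement code; the function name and the power/runtime figures are hypothetical placeholders, not numbers from the paper.

    # Illustrative sketch of the energy-delay product metric (EDP = E * t = P * t^2).
    # All values below are made-up placeholders, not measurements from the paper.

    def energy_delay_product(avg_power_watts: float, runtime_seconds: float) -> float:
        """Return EDP in joule-seconds: (average power * runtime) * runtime."""
        energy_joules = avg_power_watts * runtime_seconds
        return energy_joules * runtime_seconds

    # Hypothetical CPU-only run vs. hybrid CPU+GPU run:
    cpu_edp = energy_delay_product(avg_power_watts=150.0, runtime_seconds=1000.0)
    gpu_edp = energy_delay_product(avg_power_watts=350.0, runtime_seconds=50.0)
    print(f"CPU EDP: {cpu_edp:.3e} J*s, GPU EDP: {gpu_edp:.3e} J*s")
    print(f"EDP improvement: {cpu_edp / gpu_edp:.1f}x")

Because runtime enters the metric quadratically, a large speedup can outweigh a higher instantaneous power draw, which is the effect the paper quantifies.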
