Quantifying the Energy Efficiency of FFT on Heterogeneous Platforms

Yash Ukidave, Amir Kavyan Ziabari, Perhaad Mistry, Gunar Schirner, David Kaeli
Department of Electrical & Computer Engineering, Northeastern University, Boston, USA
International Symposium on Performance Analysis of Systems and Software (ISPASS), 2013


@inproceedings{ukidave2013quantifying,
   title={Quantifying the Energy Efficiency of FFT on Heterogeneous Platforms},
   author={Ukidave, Yash and Ziabari, Amir Kavyan and Mistry, Perhaad and Schirner, Gunar and Kaeli, David},
   booktitle={International Symposium on Performance Analysis of Systems and Software (ISPASS)},
   year={2013}
}



Heterogeneous computing using Graphics Processing Units (GPUs) has become an attractive computing model given the available scale of data-parallel performance and programming standards such as OpenCL. However, given the energy issues present with GPUs, some devices can exhaust power budgets quickly. Better solutions are needed to effectively exploit the power efficiency available on heterogeneous systems. In this paper we evaluate the power-performance trade-offs of different heterogeneous signal processing applications. More specifically, we compare the performance of 7 different implementations of the Fast Fourier Transform algorithm. Our study covers discrete GPUs and shared-memory GPUs (APUs) from AMD (Llano APUs and the Southern Islands GPU), Nvidia (Fermi) and Intel (Ivy Bridge). For this range of platforms, we characterize the different FFTs and identify the specific architectural features that most impact power consumption. Using the 7 FFT kernels, we obtain a 48% reduction in power consumption and up to a 58% improvement in performance across these different FFT implementations. These differences are also found to be target-architecture dependent. The results of this study will help the signal processing community identify which class of FFTs is most appropriate for a given platform. More importantly, we have demonstrated that different algorithms implementing the same fundamental function (FFT) can perform vastly differently based on the target hardware and associated programming optimizations.
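For readers unfamiliar with the algorithm being compared, the following is a minimal radix-2 Cooley-Tukey FFT sketch in Python. It is purely illustrative and is not one of the paper's 7 OpenCL kernels; the recursive even/odd split and the twiddle-factor combine are the structure that the GPU implementations parallelize in various ways.

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    # Split into even- and odd-indexed halves and transform each recursively.
    even = fft(x[0::2])
    odd = fft(x[1::2])
    # Twiddle factors e^{-2*pi*i*k/n} combine the two half-size transforms.
    t = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + t[k] for k in range(n // 2)] +
            [even[k] - t[k] for k in range(n // 2)])
```

The GPU variants studied in the paper differ in how this butterfly structure is mapped onto work-groups and memory hierarchies, which is what drives the power and performance differences reported.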

