cusFFT: A High-Performance Sparse Fast Fourier Transform Algorithm on GPUs

Cheng Wang, Sunita Chandrasekaran, Barbara Chapman
Department of Computer Science, University of Houston, Houston, TX, USA
30th IEEE International Parallel & Distributed Processing Symposium (IPDPS), 2016


@inproceedings{wang2016cusfft,
   author={Wang, Cheng and Chandrasekaran, Sunita and Chapman, Barbara},
   title={cusFFT: A High-Performance Sparse Fast Fourier Transform Algorithm on GPUs},
   booktitle={30th IEEE International Parallel \& Distributed Processing Symposium (IPDPS)},
   year={2016}
}



The Fast Fourier Transform (FFT) is one of the most important numerical tools, widely used in many scientific and engineering applications. The algorithm performs O(n log n) operations on n input data points even when only a small number k of the output coefficients are large, while the remaining n − k are zero or negligibly small. The algorithm is clearly inefficient when n input points lead to only k ≪ n non-zero coefficients in the transformed domain. In 2012, researchers at MIT developed a sparse FFT (sFFT) algorithm that addresses this problem. In this paper, we explore the challenges of porting sFFT to massively parallel processors such as GPUs, and propose effective solutions using CUDA. GPGPUs are being increasingly adopted as popular HPC platforms because of their tremendous computing power and remarkable cost efficiency. However, sFFT is a complex, memory-bound algorithm that is not straightforward to implement efficiently on GPUs. We present optimization strategies, such as index coalescing, loop splitting, asynchronous data layout transformation, and a linear-time selection algorithm, that are required to compute sFFT on such massively parallel architectures. Our CUDA-based sFFT, cusFFT, performs over 10x faster than the state-of-the-art cuFFT library on GPUs and over 28x faster than the parallel FFTW on multicore CPUs.
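The sparsity premise the abstract relies on can be seen with a minimal, self-contained sketch (not taken from the paper): a signal composed of k pure tones has a spectrum with exactly k large coefficients, so a dense transform spends most of its O(n log n) work producing zeros. The naive DFT below is only for illustration; sFFT recovers the k large coefficients without computing the full spectrum.

```python
import cmath

def dft(x):
    """Naive O(n^2) DFT, enough to illustrate spectral sparsity."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n) for t in range(n))
            for f in range(n)]

# A signal with k = 2 active (integer) frequencies out of n = 16 bins.
n, freqs = 16, [3, 7]
x = [sum(cmath.exp(2j * cmath.pi * f * t / n) for f in freqs) for t in range(n)]

X = dft(x)
large = [f for f, c in enumerate(X) if abs(c) > 1e-6]
print(large)  # only the k = 2 active frequencies are non-negligible: [3, 7]
```

With k ≪ n, an algorithm whose cost scales with k rather than n can beat even a highly tuned dense FFT, which is the gap cusFFT targets on GPUs.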
