Small Discrete Fourier Transforms on GPUs

S. Mitra, A. Srinivasan
Dept. of Computer Science, Florida State University, Tallahassee, FL 32306, USA
11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid), 2011

@inproceedings{mitra2011small,
   title={Small Discrete Fourier Transforms on GPUs},
   author={Mitra, S. and Srinivasan, A.},
   booktitle={Cluster, Cloud and Grid Computing (CCGrid), 2011 11th IEEE/ACM International Symposium on},
   pages={33--42},
   year={2011},
   organization={IEEE}
}

Efficient implementations of the Discrete Fourier Transform (DFT) for GPUs provide good performance with large data sizes, but are not competitive with CPU code for small data sizes. On the other hand, several applications perform multiple DFTs on small data sizes. In fact, even algorithms for large data sizes use a divide-and-conquer approach, in which small DFTs eventually need to be performed. We discuss our DFT implementation, which is efficient for multiple small DFTs. One feature of our implementation is the use of the asymptotically slower matrix multiplication approach for small data sizes, which improves performance on the GPU due to its regular memory access and computational patterns. We combine this algorithm with the mixed-radix algorithm for 1-D, 2-D, and 3-D complex DFTs. We also demonstrate the effect of different optimization techniques. When GPUs are used to accelerate a component of an application running on the host, it is important that decisions taken to optimize GPU performance do not degrade the performance of the rest of the application on the host. Consequently, another feature of our implementation is the use of a data layout that is suboptimal for the GPU in isolation but yields a better overall effect on the application. Our implementation performs up to two orders of magnitude faster than cuFFT on an NVIDIA GeForce 9800 GTX GPU, and one to two orders of magnitude faster than FFTW on a CPU, for multiple small DFTs. Furthermore, we show that our implementation can accelerate the performance of a Quantum Monte Carlo application for which cuFFT is not effective. The primary contributions of this work lie in demonstrating the utility of the matrix multiplication approach and in providing an implementation that is efficient for small DFTs when a GPU is used to accelerate an application running on the host.
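The core idea of the matrix multiplication approach described in the abstract can be illustrated with a minimal Python sketch (not the authors' GPU code): for a length-n input, the DFT is computed as a dense n×n matrix-vector product. The function names below are illustrative, and the sketch runs on the CPU; on a GPU, the appeal of this O(n^2) formulation for very small n is its regular memory access and arithmetic pattern.

```python
import cmath

def dft_matrix(n):
    # Dense DFT matrix W with W[j][k] = exp(-2*pi*i*j*k / n).
    w = cmath.exp(-2j * cmath.pi / n)
    return [[w ** (j * k) for k in range(n)] for j in range(n)]

def dft_matmul(x):
    # DFT of x computed as a matrix-vector product: X = W x.
    # Costs O(n^2) versus the FFT's O(n log n), but the access
    # pattern is perfectly regular, which is what makes it
    # attractive for very small n on a GPU.
    n = len(x)
    W = dft_matrix(n)
    return [sum(W[j][k] * x[k] for k in range(n)) for j in range(n)]

# Example: DFT of a small real input.
x = [1.0, 2.0, 3.0, 4.0]
X = dft_matmul(x)
```

For n = 4 and x = [1, 2, 3, 4], this yields X = [10, -2+2i, -2, -2-2i], matching the direct definition of the DFT. In the paper's mixed-radix setting, transforms of larger sizes are decomposed until the base cases are small enough for this dense formulation to win.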
