
Implementation of 3D FFTs Across Multiple GPUs in Shared Memory Environments

Nimalan Nandapalan, Jiri Jaros, Alistair P Rendell, Bradley Treeby
Research School of Computer Science, ANU College of Engineering and Computer Science, Australian National University, ACT 0200, Australia
13th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT), 2012

@inproceedings{nandapalan2012implementation,

   author={Nimalan Nandapalan and Jiri Jaros and Bradley E. Treeby and Alistair P. Rendell},

   title={Implementation of 3D FFTs Across Multiple GPUs in Shared Memory Environments},

   pages={6},

   booktitle={Proceedings of the Thirteenth International Conference on Parallel and Distributed Computing, Applications and Technologies},

   year={2012},

   location={Beijing},

   language={english},

   url={http://www.fit.vutbr.cz/research/view_pub.php?id=10171}

}


In this paper, a novel implementation of the distributed 3D Fast Fourier Transform (FFT) on a multi-GPU platform using CUDA is presented. The 3D FFT is the core of many simulation methods, so its fast calculation is critical. The main bottleneck of the distributed 3D FFT is the global data exchange that must be performed. The latest version of CUDA introduces direct GPU-to-GPU transfers using a Unified Virtual Address space (UVA), which opens new possibilities for optimising the communication part of the FFT. Here, we propose different implementations of the distributed 3D FFT, investigate their behaviour, and compare their performance with the single-GPU CUFFT and CPU-based FFTW libraries. In particular, we demonstrate the advantage of direct GPU-to-GPU transfers over data exchanges via host main memory. Our preliminary results show that running the distributed 3D FFT with four GPUs can bring a 12% speedup over a single GPU (CUFFT) while also enabling the calculation of 3D FFTs of larger datasets. Replacing the global data exchange via shared memory with direct GPU-to-GPU transfers reduces the execution time by up to 49%. This clearly shows that direct GPU-to-GPU transfers are the key factor in obtaining good performance on multi-GPU systems.
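The structure of a distributed 3D FFT, with its characteristic global data exchange, can be illustrated independently of the GPU details. Below is a minimal NumPy sketch of the standard slab decomposition: each device computes 2D FFTs over its local slab, an all-to-all exchange repartitions the data, and the remaining 1D FFTs are computed locally. This is an assumed illustrative model, not the authors' CUDA/CUFFT implementation; in the paper the exchange step is realised either via host shared memory or via direct GPU-to-GPU transfers.

```python
import numpy as np

def distributed_fft3d(x, n_parts):
    """Slab-decomposed 3D FFT over n_parts logical devices (serial model)."""
    # Stage 1: each "device" owns a slab of planes along axis 0 and
    # computes 2D FFTs over axes 1 and 2 of its slab.
    slabs = np.split(x, n_parts, axis=0)
    slabs = [np.fft.fft2(s, axes=(1, 2)) for s in slabs]
    # Global exchange (all-to-all): repartition along axis 1 so that each
    # part now holds complete pencils along axis 0. On a real multi-GPU
    # system this is the communication bottleneck discussed in the paper.
    y = np.concatenate(slabs, axis=0)
    parts = np.split(y, n_parts, axis=1)
    # Stage 2: 1D FFTs along the remaining axis, again fully local.
    parts = [np.fft.fft(p, axis=0) for p in parts]
    return np.concatenate(parts, axis=1)
```

The result matches a direct `np.fft.fftn` on the whole array; only the data movement pattern differs, which is exactly the part the paper optimises.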
