cuFINUFFT: a load-balanced GPU library for general-purpose nonuniform FFTs

Yu-hsuan Shih, Garrett Wright, Joakim Andén, Johannes Blaschke, Alex H. Barnett
Courant Institute of Mathematical Sciences, New York University, New York, NY, USA
arXiv:2102.08463 [cs.DC] (16 Feb 2021)

@misc{shih2021cufinufft,
  title={cuFINUFFT: a load-balanced GPU library for general-purpose nonuniform FFTs},
  author={Yu-hsuan Shih and Garrett Wright and Joakim And{\'e}n and Johannes Blaschke and Alex H. Barnett},
  year={2021},
  eprint={2102.08463},
  archivePrefix={arXiv},
  primaryClass={cs.DC}
}

Nonuniform fast Fourier transforms dominate the computational cost in many applications including image reconstruction and signal processing. We thus present a general-purpose GPU-based CUDA library for type 1 (nonuniform to uniform) and type 2 (uniform to nonuniform) transforms in dimensions 2 and 3, in single or double precision. It achieves high performance for a given user-requested accuracy, regardless of the distribution of nonuniform points, via cache-aware point reordering and load-balanced blocked spreading in shared memory. At low accuracies, this gives on-GPU throughputs around 10^9 nonuniform points per second, and (even including host-device transfer) is typically 4-10x faster than the latest parallel CPU code FINUFFT (at 28 threads). It is competitive with two established GPU codes, being up to 90x faster at high accuracy and/or type 1 clustered point distributions. Finally we demonstrate a 6-18x speedup versus CPU in an X-ray diffraction 3D iterative reconstruction task at 10^−12 accuracy, observing excellent multi-GPU weak scaling up to one rank per GPU.
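For readers unfamiliar with the two transform types named in the abstract, they have simple direct-summation definitions. The sketch below (a plain NumPy illustration; the function names and the `isign` convention are my assumptions, not cuFINUFFT's API) evaluates those definitions in 2D by brute force. cuFINUFFT computes the same sums to a user-requested tolerance in near-FFT time; the direct sums here cost O(M·N1·N2) and serve only to pin down what is being computed.

```python
import numpy as np

def nufft2d_type1_direct(x, y, c, N1, N2, isign=+1):
    """Direct evaluation of the 2D type-1 NUFFT (nonuniform to uniform):
        f[k1, k2] = sum_j c[j] * exp(isign * 1j * (k1*x[j] + k2*y[j]))
    for integer frequencies k1 in [-N1//2, (N1-1)//2], similarly k2.
    Illustrates the transform's definition, not the library's fast algorithm."""
    k1 = np.arange(-(N1 // 2), (N1 + 1) // 2)
    k2 = np.arange(-(N2 // 2), (N2 + 1) // 2)
    E1 = np.exp(1j * isign * np.outer(k1, x))   # shape (N1, M)
    E2 = np.exp(1j * isign * np.outer(k2, y))   # shape (N2, M)
    return (E1 * c) @ E2.T                      # shape (N1, N2)

def nufft2d_type2_direct(x, y, f, isign=+1):
    """Direct evaluation of the 2D type-2 NUFFT (uniform to nonuniform):
        c[j] = sum_{k1,k2} f[k1, k2] * exp(isign * 1j * (k1*x[j] + k2*y[j]))."""
    N1, N2 = f.shape
    k1 = np.arange(-(N1 // 2), (N1 + 1) // 2)
    k2 = np.arange(-(N2 // 2), (N2 + 1) // 2)
    E1 = np.exp(1j * isign * np.outer(k1, x))   # (N1, M)
    E2 = np.exp(1j * isign * np.outer(k2, y))   # (N2, M)
    # Sum f over both frequency axes with the per-point phase factors.
    return np.einsum('ab,aj,bj->j', f, E1, E2)  # shape (M,)
```

As a sanity check, a single unit-strength source at the origin yields type-1 coefficients that are identically one, and a uniform spectrum concentrated at the zero frequency yields type-2 values of one at every nonuniform point.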

* * *

HGPU group © 2010-2021 hgpu.org

All rights belong to the respective authors
