AccFFT: A library for distributed-memory FFT on CPU and GPU architectures

Amir Gholami, Judith Hill, Dhairya Malhotra, George Biros
Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, TX 78712, USA
arXiv:1506.07933 [cs.DC], (26 Jun 2015)

@article{gholami2015accfft,
   title={AccFFT: A library for distributed-memory FFT on CPU and GPU architectures},
   author={Gholami, Amir and Hill, Judith and Malhotra, Dhairya and Biros, George},
   year={2015},
   eprint={1506.07933},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}
We present a new library for parallel distributed Fast Fourier Transforms (FFTs). Despite the large body of work on FFTs, we show that significant speedups can still be achieved for distributed transforms. The importance of the FFT in science and engineering and the advances in high-performance computing necessitate further improvements. AccFFT extends existing FFT libraries for x86 architectures (CPUs) and CUDA-enabled Graphics Processing Units (GPUs) to distributed-memory clusters using the Message Passing Interface (MPI). Our library uses specifically optimized all-to-all communication algorithms to perform the communication phase of the distributed FFT efficiently, and the GPU-based algorithm effectively hides the overhead of PCIe transfers. We present numerical results on the Maverick and Stampede platforms at the Texas Advanced Computing Center (TACC) and on the Titan system at Oak Ridge National Laboratory (ORNL). We compare the CPU version of AccFFT with the P3DFFT and PFFT libraries and show a consistent 2-3x speedup across a range of processor counts and problem sizes. A comparison of the GPU code with the FFTE library shows a similar trend, with a 2x speedup. The library has been tested on up to 131K cores and 4,096 GPUs of Titan, and on up to 16K cores of Stampede.
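For context, the communication phase mentioned in the abstract amounts to a global transpose of the distributed array between batches of local 1D FFTs. The C sketch below illustrates that pattern for a slab-decomposed 2D complex transform using plain FFTW and MPI_Alltoall. It is a generic illustration under assumed grid sizes and data layout, not AccFFT's API or its optimized all-to-all scheme.

/*
 * Minimal sketch (not AccFFT's actual API): communication phase of a
 * slab-decomposed distributed FFT. Each rank transforms its local rows,
 * then the data are redistributed with MPI_Alltoall so the remaining
 * dimension becomes local. Names and sizes are illustrative assumptions.
 */
#include <mpi.h>
#include <fftw3.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Global N x N complex grid, slab-decomposed: each rank owns N/nprocs rows.
       N is assumed to be divisible by nprocs. */
    const int N = 512;
    const int rows = N / nprocs;

    fftw_complex *slab = fftw_alloc_complex((size_t)rows * N);
    fftw_complex *recv = fftw_alloc_complex((size_t)rows * N);
    for (size_t i = 0; i < (size_t)rows * N; ++i) {  /* placeholder input */
        slab[i][0] = 0.0;
        slab[i][1] = 0.0;
    }

    /* Local 1D FFTs along the contiguous dimension: 'rows' transforms of length N. */
    fftw_plan p = fftw_plan_many_dft(1, &N, rows,
                                     slab, NULL, 1, N,
                                     slab, NULL, 1, N,
                                     FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_execute(p);

    /* Global transpose step: every rank exchanges equal-sized blocks with every
       other rank. This assumes the slab is already packed so that the elements
       destined for rank i form the i-th contiguous block; a production library
       does this packing explicitly and may overlap the exchange with computation
       (or with PCIe transfers on GPUs). */
    const int block = rows * (N / nprocs);   /* complex elements per partner */
    MPI_Alltoall(slab, block, MPI_C_DOUBLE_COMPLEX,
                 recv, block, MPI_C_DOUBLE_COMPLEX, MPI_COMM_WORLD);

    /* ... local 1D FFTs along the now-local dimension would follow ... */

    fftw_destroy_plan(p);
    fftw_free(slab);
    fftw_free(recv);
    MPI_Finalize();
    return 0;
}

Built with, e.g., mpicc sketch.c -lfftw3 and launched with mpirun, each rank performs its local transforms and one collective exchange; the exchange is the bandwidth-bound step that distributed FFT libraries such as AccFFT optimize and overlap to obtain their speedups.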