
A Multi-GPU Spectrometer System for Real-Time Wide Bandwidth Radio Signal Analysis

Hirofumi Kondo, Eric Heien, Masao Okita, Dan Werthimer, Kenichi Hagihara
Graduate School of Information Science and Technology, Osaka University, Japan
International Symposium on Parallel and Distributed Processing with Applications (ISPA), 2010

@conference{kondo2010multi,
   title={A Multi-GPU Spectrometer System for Real-time Wide Bandwidth Radio Signal Analysis},
   author={Kondo, H. and Heien, E. and Okita, M. and Werthimer, D. and Hagihara, K.},
   booktitle={International Symposium on Parallel and Distributed Processing with Applications},
   pages={594--604},
   year={2010},
   organization={IEEE}
}


This paper describes the implementation of a wide-bandwidth multi-GPU signal processing system for radio astronomy observation. The system performs very large Fast Fourier Transforms (FFTs) and spectrum analysis to achieve real-time analysis of a wide-bandwidth spectrum, accomplished by implementing a four-step FFT algorithm in the Compute Unified Device Architecture (CUDA). A key feature of this implementation is that the volume of data transferred between CPU and GPU is reduced through redundant calculation. We also apply pipelined execution to minimize idle processor time, even with multiple GPUs on a shared bus. Using a single GPU, the system can analyze 1 GB of signal data (128 MHz bandwidth at 1 Hz resolution in single-precision floating-point complex format) in 0.44 seconds. With the multi-GPU setup, four GPUs process 4 GB of signal data in 0.82 seconds, equivalent to a processing speed of around 60 GFLOPS. In particular, we focus on using this system in the Search for Extraterrestrial Radio Emissions from Nearby Developed Intelligent Populations (SERENDIP) project. By using multiple GPUs, we obtain sufficient practical performance for high-bandwidth radio astronomy projects such as SERENDIP.
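The four-step FFT the abstract refers to is a standard decomposition (often attributed to Bailey): a transform of length N = N1·N2 is split into N1 small row FFTs, a twiddle-factor multiplication, N2 small column FFTs, and a transpose, which is what makes it amenable to batched GPU kernels. The sketch below is our own illustration of that decomposition, not the paper's CUDA code; the `dft` helper is a naive O(n²) stand-in for the batched GPU FFT kernels.

```python
import cmath

def dft(x):
    """Naive O(n^2) DFT; stands in for the batched FFT kernels run on the GPU."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]

def four_step_fft(x, n1, n2):
    """Four-step FFT of a length N = n1*n2 signal (Bailey's decomposition).

    Input index:  n = i1 + n1*i2   (i1 < n1, i2 < n2)
    Output index: k = k2 + n2*k1   (k1 < n1, k2 < n2)
    """
    N = n1 * n2
    assert len(x) == N
    # Step 1: view x as an n1 x n2 matrix and FFT each row (length n2).
    rows = [dft([x[i1 + n1 * i2] for i2 in range(n2)]) for i1 in range(n1)]
    # Step 2: multiply by the twiddle factors W_N^(i1*k2).
    for i1 in range(n1):
        for k2 in range(n2):
            rows[i1][k2] *= cmath.exp(-2j * cmath.pi * i1 * k2 / N)
    # Step 3: FFT each column (length n1).
    cols = [dft([rows[i1][k2] for i1 in range(n1)]) for k2 in range(n2)]
    # Step 4: transpose into the natural output order X[k2 + n2*k1].
    return [cols[k2][k1] for k1 in range(n1) for k2 in range(n2)]
```

The quoted data size is consistent with this layout: 128 MHz bandwidth at 1 Hz resolution gives 128×10⁶ complex bins, and at 8 bytes per single-precision complex sample that is about 1.02 GB per transform.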

