
Enabling a High Throughput Real Time Data Pipeline for a Large Radio Telescope Array with GPUs

R. G. Edgar, M. A. Clark, K. Dale, D. A. Mitchell, S. M. Ord, R. B. Wayth, H. Pfister, L. J. Greenhill
Initiative in Innovative Computing, 29 Oxford Street, Cambridge MA 02138
Computer Physics Communications, Volume 181, Issue 10, pp. 1707–1714, arXiv:1003.5575 [astro-ph.IM] (29 Mar 2010)

@article{edgar2010enabling,
   title={Enabling a high throughput real time data pipeline for a large radio telescope array with GPUs},
   author={Edgar, R. G. and Clark, M. A. and Dale, K. and Mitchell, D. A. and Ord, S. M. and Wayth, R. B. and Pfister, H. and Greenhill, L. J.},
   journal={Computer Physics Communications},
   volume={181},
   number={10},
   pages={1707--1714},
   issn={0010-4655},
   year={2010},
   publisher={Elsevier}
}


The Murchison Widefield Array (MWA) is a next-generation radio telescope currently under construction in the remote Western Australia Outback. Raw data will be generated continuously at 5 GiB/s, grouped into 8 s cadences. This high throughput motivates the development of on-site, real time processing and reduction in preference to archiving, transport and off-line processing. Each batch of 8 s data must be completely reduced before the next batch arrives. Maintaining real time operation will require a sustained performance of around 2.5 TFLOP/s (including convolutions, FFTs, interpolations and matrix multiplications). We describe a scalable heterogeneous computing pipeline implementation, exploiting both the high computing density and FLOP-per-Watt ratio of modern GPUs. The architecture is highly parallel within and across nodes, with all major processing elements performed by GPUs. Necessary scatter-gather operations along the pipeline are loosely synchronized between the nodes hosting the GPUs. The MWA will be a frontier scientific instrument and a pathfinder for planned peta- and exascale facilities.
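To make the stated budget concrete, a quick back-of-envelope check in Python follows. Only the 5 GiB/s input rate, the 8 s cadence, and the 2.5 TFLOP/s sustained-performance figures come from the abstract; the derived per-batch totals and the arithmetic intensity are our extrapolation, not numbers reported by the authors.

```python
# Back-of-envelope budget for one 8 s cadence, using only the figures
# quoted in the abstract (5 GiB/s raw input, 2.5 TFLOP/s sustained).

GIB = 2**30                  # bytes in one GiB

rate_bytes = 5 * GIB         # raw input rate, bytes/s
cadence_s = 8                # batch length, seconds
sustained_flops = 2.5e12     # required sustained FLOP rate, FLOP/s

bytes_per_batch = rate_bytes * cadence_s        # data to reduce per batch
flops_per_batch = sustained_flops * cadence_s   # work available per batch

print(f"data per 8 s batch:   {bytes_per_batch / GIB:.0f} GiB")
print(f"work per 8 s batch:   {flops_per_batch:.2e} FLOPs")
print(f"arithmetic intensity: {flops_per_batch / bytes_per_batch:.0f} FLOP/byte")
```

Running this gives 40 GiB and 2.00e13 FLOPs per batch, i.e. a few hundred FLOPs of processing per input byte, which is the regime where shipping the computation to on-site GPUs is far cheaper than archiving and transporting the raw data.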
