Tera-scale Astronomical Data Analysis and Visualization
Centre for Astrophysics and Supercomputing, Swinburne University of Technology, PO Box 218, Hawthorn, Australia, 3122
arXiv:1211.4896 [astro-ph.IM] (20 Nov 2012)
@article{2012arXiv1211.4896H,
  author        = {{Hassan}, A.~H. and {Fluke}, C.~J. and {Barnes}, D.~G. and {Kilborn}, V.~A.},
  title         = "{Tera-scale Astronomical Data Analysis and Visualization}",
  journal       = {ArXiv e-prints},
  archivePrefix = "arXiv",
  eprint        = {1211.4896},
  primaryClass  = "astro-ph.IM",
  keywords      = {Astrophysics - Instrumentation and Methods for Astrophysics, Computer Science - Distributed, Parallel, and Cluster Computing, Computer Science - Graphics},
  year          = 2012,
  month         = nov,
  adsurl        = {http://adsabs.harvard.edu/abs/2012arXiv1211.4896H},
  adsnote       = {Provided by the SAO/NASA Astrophysics Data System}
}
We present a high-performance, graphics processing unit (GPU)-based framework for the efficient analysis and visualization of (nearly) terabyte (TB)-sized 3-dimensional images. Using a cluster of 96 GPUs, we demonstrate for a 0.5 TB image: (1) volume rendering using an arbitrary transfer function at 7–10 frames per second; (2) computation of basic global image statistics such as the mean intensity and standard deviation in 1.7 s; (3) evaluation of the image histogram in 4 s; and (4) evaluation of the global image median intensity in just 45 s. Our measured results correspond to a raw computational throughput approaching one teravoxel per second, and are 10–100 times faster than the best possible performance with traditional single-node, multi-core CPU implementations. A scalability analysis shows the framework will scale well to images sized 1 TB and beyond. Other parallel data analysis algorithms can be added to the framework with relative ease, and accordingly, we present our framework as a possible solution to the image analysis and visualization requirements of next-generation telescopes, including the forthcoming Square Kilometre Array pathfinder radio telescopes.
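The abstract does not include source code, but the global statistics step (item 2 above) can be illustrated with a minimal sketch. The following CUDA/Thrust example is a hypothetical illustration, not the authors' implementation: it computes per-GPU partial sums for the mean and standard deviation of one node's local sub-volume. In a distributed setting such as the 96-GPU cluster described above, each node would reduce its own chunk and the (count, sum, sum-of-squares) triples would then be combined across nodes (e.g. via MPI) to form the global values; all names and sizes here are placeholders.

// Hypothetical sketch, not the authors' implementation: per-GPU partial
// statistics for a local sub-volume using CUDA/Thrust. A full multi-node run
// would combine each GPU's (count, sum, sum of squares) across ranks to
// produce the global mean and standard deviation.
#include <thrust/device_vector.h>
#include <thrust/reduce.h>
#include <thrust/transform_reduce.h>
#include <thrust/functional.h>
#include <cmath>
#include <cstdio>

int main() {
    // Stand-in for one GPU's share of the image; a real sub-volume would be
    // loaded from the 3-dimensional data cube.
    const size_t n = 1 << 24;
    thrust::device_vector<float> voxels(n, 1.5f);

    // On-device reductions: sum and sum of squares over the local chunk.
    float sum   = thrust::reduce(voxels.begin(), voxels.end(), 0.0f);
    float sumsq = thrust::transform_reduce(voxels.begin(), voxels.end(),
                                           thrust::square<float>(), 0.0f,
                                           thrust::plus<float>());

    // Local statistics; the naive sumsq/n - mean^2 form is adequate for a
    // sketch but can lose precision on real data.
    float mean   = sum / n;
    float stddev = std::sqrt(sumsq / n - mean * mean);
    printf("local mean = %f, local stddev = %f over %zu voxels\n",
           mean, stddev, n);
    return 0;
}

Because sums and sums of squares combine associatively, the cross-node merge is a single small reduction regardless of image size, which is consistent with the near-linear scalability the abstract reports for these global statistics.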
November 22, 2012 by hgpu