A Distributed GPU-based Framework for real-time 3D Volume Rendering of Large Astronomical Data Cubes

A.H. Hassan, C.J. Fluke, D.G. Barnes
Centre for Astrophysics & Supercomputing, Swinburne University of Technology, Hawthorn, Victoria, Australia
arXiv:1205.0282v1 [astro-ph.IM] (1 May 2012)


@ARTICLE{2012arXiv1205.0282H,
   author = {{Hassan}, A.~H. and {Fluke}, C.~J. and {Barnes}, D.~G.},
    title = "{A Distributed GPU-based Framework for real-time 3D Volume Rendering of Large Astronomical Data Cubes}",
  journal = {ArXiv e-prints},
archivePrefix = "arXiv",
   eprint = {1205.0282},
 primaryClass = "astro-ph.IM",
 keywords = {Astrophysics - Instrumentation and Methods for Astrophysics, Computer Science - Distributed, Parallel, and Cluster Computing, Computer Science - Graphics},
     year = 2012,
    month = may,
  adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}





We present a framework to interactively volume-render three-dimensional data cubes using distributed ray-casting and volume bricking over a cluster of workstations, each powered by one or more graphics processing units (GPUs) and a multi-core CPU. The main design target for this framework is an in-core visualization solution able to deliver interactive, three-dimensional views of terabyte-sized data cubes. We tested the framework on a computing cluster comprising 64 nodes with a total of 128 GPUs, where it scaled to render a 204 GB data cube at an average of 30 frames per second. Our performance analysis also compares the NVIDIA Tesla 1060 and 2050 GPU architectures and examines the effect of increasing the visualization output resolution on rendering performance. Although our initial focus, and the examples presented in this work, is volume rendering of spectral data cubes from radio astronomy, we contend that our approach has applicability to other disciplines where close-to-real-time volume rendering of terabyte-order 3D data sets is a requirement.
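The paper does not reproduce its implementation here, but the core idea of bricking a cube across nodes, rendering each brick independently, and compositing the partial images can be sketched in a few lines. The following NumPy sketch is purely illustrative and not the authors' code: it partitions the volume into slabs along the view axis (one per hypothetical rendering node), uses a maximum-intensity projection (MIP) as a stand-in for GPU ray-casting, and composites the partial images with an element-wise maximum. All function names are assumptions for this example.

```python
import numpy as np

def make_slabs(volume, n_nodes):
    """Partition the cube into slabs along the view axis -- a
    simplified stand-in for volume bricking, one slab per node."""
    return np.array_split(volume, n_nodes, axis=2)

def render_partial(slab):
    """Per-node partial render: a maximum-intensity projection
    (MIP) along the view axis stands in for GPU ray-casting."""
    return slab.max(axis=2)

def composite(partials):
    """Compositing step: for MIP, partial images from slabs along
    the same rays combine with an element-wise maximum."""
    out = partials[0]
    for p in partials[1:]:
        out = np.maximum(out, p)
    return out

# Demo: the distributed MIP matches a single-pass MIP of the full cube.
rng = np.random.default_rng(0)
cube = rng.random((64, 64, 64))
image = composite([render_partial(s) for s in make_slabs(cube, 4)])
assert np.array_equal(image, cube.max(axis=2))
```

Because the maximum operator is associative, per-slab results can be combined in any order, which is what makes this style of compositing attractive for a cluster; other transfer functions require depth-ordered (over-operator) compositing instead.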
