Isosurface Extraction and View-Dependent Filtering from Time-Varying Fields Using Persistent Time-Octree (PTOT)
CSE Dept., Polytechnic Institute of New York University, Brooklyn, NY, USA
IEEE Transactions on Visualization and Computer Graphics, 2009, Volume 15, Issue 6, pp. 1367-1374
@article{wang2009isosurface,
title={Isosurface Extraction and View-Dependent Filtering from Time-Varying Fields Using Persistent Time-Octree (PTOT)},
author={Wang, C. and Chiang, Y.J.},
journal={IEEE Transactions on Visualization and Computer Graphics},
volume={15},
number={6},
pages={1367–1374},
issn={1077-2626},
year={2009},
publisher={IEEE}
}
We develop a new algorithm for isosurface extraction and view-dependent filtering from large time-varying fields, using a novel persistent time-octree (PTOT) indexing structure. Previously, the persistent octree (POT) was proposed to perform isosurface extraction and view-dependent filtering; it combines the advantages of the interval tree (for optimal searches of active cells) and of the branch-on-need octree (BONO, for view-dependent filtering), but it only works for steady-state (i.e., single time step) data. For time-varying fields, a 4D version of POT, the 4D-POT, was proposed for 4D isocontour slicing, where slicing on the time domain gives all active cells for the queried time step and isovalue. However, such slicing is not output sensitive, and thus the search is sub-optimal. Moreover, it was not known how to support view-dependent filtering in addition to time-domain slicing.

In this paper, we develop a novel persistent time-octree (PTOT) indexing structure, which has the advantages of POT and performs 4D isocontour slicing on the time domain with an output-sensitive, optimal search. In addition, when we query the same isovalue q over m consecutive time steps, there is no additional search overhead (beyond reporting the additional active cells) compared to querying just the first time step. This search performance for finding active cells is asymptotically optimal, with asymptotically optimal space and preprocessing time as well. Moreover, our PTOT supports view-dependent filtering in addition to time-domain slicing. We propose a simple and effective out-of-core scheme that integrates our PTOT with implicit occluders, batched occlusion queries, and batched CUDA computing tasks, so that we greatly reduce the I/O cost and increase the amount of data computed concurrently on the GPU.

The result is an efficient algorithm for isosurface extraction with view-dependent filtering that exploits a state-of-the-art programmable GPU for time-varying fields larger than main memory. Our experiments on datasets as large as 192 GB (4 GB per time step), using no more than 870 MB of memory in both the preprocessing and run-time phases, demonstrate the efficacy of our new technique.
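To make the abstract's active-cell search concrete, the following is a rough, simplified sketch (not the authors' code) of the kind of BONO-style min/max octree traversal on which the PTOT query builds, shown here for a single time step: subtrees whose value range does not contain the isovalue q, or that have been marked invisible by a view-dependent filtering pass, are pruned, and surviving leaf cells spanning q are reported. All types and field names (Cell, OctreeNode, visible, collectActiveCells) are illustrative assumptions, not names from the paper.

// Simplified sketch, assuming hypothetical types; not the authors' PTOT code.
#include <cstdint>
#include <vector>

struct Cell {                              // hypothetical leaf cell record
    std::uint32_t id;
    float minVal, maxVal;                  // scalar range over the cell's corners
};

struct OctreeNode {
    float minVal, maxVal;                  // scalar range of the whole subtree
    std::vector<OctreeNode> children;      // empty at a leaf
    std::vector<Cell> cells;               // cells stored at a leaf
    bool visible = true;                   // set by a view-dependent filtering pass
};

// Report all active cells (cells whose value range spans the isovalue q),
// skipping subtrees that cannot contain the isosurface or are occluded.
void collectActiveCells(const OctreeNode& node, float q, std::vector<Cell>& out)
{
    if (q < node.minVal || q > node.maxVal || !node.visible)
        return;                            // prune: no visible active cells here
    if (node.children.empty()) {           // leaf: test individual cells
        for (const Cell& c : node.cells)
            if (c.minVal <= q && q <= c.maxVal)
                out.push_back(c);
        return;
    }
    for (const OctreeNode& child : node.children)
        collectActiveCells(child, q, out); // recurse into candidate octants
}

The sketch covers only a single-time-step traversal; the paper's contribution is that the persistent time-octree answers the same isovalue query over m consecutive time steps with no additional search overhead beyond reporting the extra active cells, and does so with asymptotically optimal space and preprocessing time.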