Fast Neural Representations for Direct Volume Rendering
Technical University of Munich, Germany
arXiv:2112.01579 [cs.GR] (2 Dec 2021)
@misc{weiss2021fast,
  title={Fast Neural Representations for Direct Volume Rendering},
  author={Sebastian Weiss and Philipp Herm{\"u}ller and R{\"u}diger Westermann},
  year={2021},
  eprint={2112.01579},
  archivePrefix={arXiv},
  primaryClass={cs.GR}
}
Despite the potential of neural scene representations to effectively compress 3D scalar fields at high reconstruction quality, the computational complexity of the training and data reconstruction step using scene representation networks limits their use in practical applications. In this paper, we analyze whether scene representation networks can be modified to reduce these limitations and whether these architectures can also be used for temporal reconstruction tasks. We propose a novel design of scene representation networks using GPU tensor cores to integrate the reconstruction seamlessly into on-chip raytracing kernels. Furthermore, we investigate the use of image-guided network training as an alternative to classical data-driven approaches, and we explore the potential strengths and weaknesses of this alternative regarding quality and speed. As an alternative to spatial super-resolution approaches for time-varying fields, we propose a solution that builds upon latent-space interpolation to enable random access reconstruction at arbitrary granularity. We summarize our findings in the form of an assessment of the strengths and limitations of scene representation networks for scientific visualization tasks and outline promising future research directions in this field.
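The latent-space interpolation idea for time-varying fields can be illustrated with a small sketch: a representation network is conditioned on a per-timestep latent code, and an arbitrary (fractional) time is reconstructed by interpolating between neighboring keyframe latents instead of storing or training a network per frame. The code below is a hedged toy illustration with assumed names, sizes, and random weights standing in for a trained network; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8   # assumed latent-code size (illustrative)
HIDDEN = 32      # assumed hidden width (illustrative)

# Random weights stand in for a trained scene representation network.
W1 = rng.normal(size=(3 + LATENT_DIM, HIDDEN))
W2 = rng.normal(size=(HIDDEN, 1))

def decode(xyz, z):
    """Evaluate the (untrained) network at positions xyz (N, 3)
    conditioned on latent code z; returns one scalar per position."""
    cond = np.broadcast_to(z, (xyz.shape[0], LATENT_DIM))
    h = np.tanh(np.concatenate([xyz, cond], axis=1) @ W1)
    return (h @ W2).ravel()

# One learned latent code per stored timestep (random placeholders here).
keyframe_latents = rng.normal(size=(4, LATENT_DIM))  # timesteps t = 0..3

def reconstruct(xyz, t):
    """Random-access reconstruction at arbitrary time t by linearly
    interpolating the two neighboring keyframe latent codes."""
    i = int(np.clip(np.floor(t), 0, len(keyframe_latents) - 2))
    a = t - i
    z = (1 - a) * keyframe_latents[i] + a * keyframe_latents[i + 1]
    return decode(xyz, z)

pts = rng.uniform(-1.0, 1.0, size=(5, 3))
mid = reconstruct(pts, 1.5)  # query halfway between keyframes 1 and 2
print(mid.shape)
```

Because time enters only through the latent code, any granularity of temporal access costs a single network evaluation, which is what makes this attractive compared to spatial super-resolution pipelines that must process whole frames.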
December 12, 2021 by hgpu