Action-Based Multifield Video Visualization

Ralf P. Botchen, Sven Bachthaler, Fabian Schick, Min Chen, Greg Mori, Daniel Weiskopf, Thomas Ertl
Institute for Visualization and Interactive Systems, Universität Stuttgart, Germany
IEEE Transactions on Visualization and Computer Graphics, Vol. 14, No. 4 (2008), pp. 885-899


@article{botchen2008action,
   title={Action-based multifield video visualization},
   author={Botchen, R.P. and Bachthaler, S. and Schick, F. and Chen, M. and Mori, G. and Weiskopf, D. and Ertl, T.},
   journal={IEEE Transactions on Visualization and Computer Graphics},
   volume={14},
   number={4},
   pages={885--899},
   year={2008},
   publisher={Published by the IEEE Computer Society}
}





One challenge in video processing is to detect actions and events, known or unknown, in video streams dynamically. This paper proposes a visualization solution, where a video stream is depicted as a series of snapshots at a relatively sparse interval, and detected actions are highlighted with continuous abstract illustrations. The combined imagery and illustrative visualization conveys multi-field information in a manner similar to electrocardiograms (ECG) and seismographs. We thus name this type of video visualization VideoPerpetuoGram (VPG). In this paper, we describe a system that handles the raw and processed information of the video stream in a multi-field visualization pipeline. As examples, we consider the needs for highlighting several types of processed information, including detected actions in video streams and estimated relationships between recognized objects. We examine effective means for depicting multi-field information in VPG, and support our choice of visual mappings through a survey. Our GPU implementation facilitates the VPG-specific viewing specification through a sheared object space, as well as volume bricking and combinational rendering of volume data and glyphs.
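The "sheared object space" mentioned above can be pictured as stacking video frames along a time axis that is tilted in image space, so the stream reads like a continuous strip, much like an ECG trace. The following is a minimal illustrative sketch of that layout idea using NumPy; the frame offsets (`dx_per_frame`, `dy_per_frame`) and the per-frame compositing loop are assumptions for illustration only, whereas the paper's actual GPU renderer operates on bricked volume data with glyph overlays.

```python
import numpy as np

def vpg_shear_layout(video, dx_per_frame=4, dy_per_frame=2):
    """Arrange grayscale video frames along a sheared time axis.

    Each frame i is pasted onto a canvas at an offset proportional
    to its time index, so time maps to a diagonal spatial direction,
    which mimics the VPG-style sheared viewing of a video volume.
    Later frames are drawn on top of earlier ones where they overlap.
    """
    t, h, w = video.shape
    # Canvas must hold the last frame at its full shear offset.
    out_h = h + dy_per_frame * (t - 1)
    out_w = w + dx_per_frame * (t - 1)
    canvas = np.zeros((out_h, out_w), dtype=video.dtype)
    for i in range(t):
        y0, x0 = i * dy_per_frame, i * dx_per_frame
        canvas[y0:y0 + h, x0:x0 + w] = video[i]
    return canvas

# Usage: three 2x2 frames become one sheared strip.
video = np.ones((3, 2, 2), dtype=np.float32)
strip = vpg_shear_layout(video)
```

In the actual system this shear is applied as a viewing transformation on the GPU rather than by copying pixels, which lets the snapshots and the continuous action illustrations share one object space.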

* * *


HGPU group © 2010-2022 hgpu.org

All rights belong to the respective authors
