A Unified Framework for Multi-Sensor HDR Video Reconstruction
C-Research, Department of Media and Information Technology, Linköping University
arXiv:1308.4908 [cs.CV] (22 Aug 2013)
@article{2013arXiv1308.4908K,
  author        = {{Kronander}, J. and {Gustavson}, S. and {Bonnet}, G. and {Ynnerman}, A. and {Unger}, J.},
  title         = "{A Unified Framework for Multi-Sensor HDR Video Reconstruction}",
  journal       = {ArXiv e-prints},
  archivePrefix = "arXiv",
  eprint        = {1308.4908},
  primaryClass  = "cs.CV",
  keywords      = {Computer Science - Computer Vision and Pattern Recognition, Computer Science - Graphics, Computer Science - Multimedia},
  year          = {2013},
  month         = {aug},
  adsurl        = {http://adsabs.harvard.edu/abs/2013arXiv1308.4908K},
  adsnote       = {Provided by the SAO/NASA Astrophysics Data System}
}
One of the most successful approaches to modern high-quality HDR video capture is to use camera setups with multiple sensors imaging the scene through a common optical system. However, such systems pose several challenges for HDR reconstruction algorithms. Previous reconstruction techniques have treated debayering, denoising, resampling (alignment), and exposure fusion as separate problems. In contrast, in this paper we present a unifying approach that performs HDR assembly directly from raw sensor data. Our framework includes a camera noise model adapted to HDR video and an algorithm for spatially adaptive HDR reconstruction based on fitting local polynomial approximations to the observed sensor data. The method is easy to implement and allows reconstruction at an arbitrary resolution and output mapping. We present an implementation in CUDA and demonstrate real-time performance for an experimental 4-Mpixel multi-sensor HDR video system. We further show that our algorithm has clear advantages over existing methods, both in flexibility and in reconstruction quality.
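The abstract names the key ingredients (a per-sensor noise model, saturation handling, and a spatially adaptive local polynomial fit over raw samples) without implementation detail. As an illustration only, here is a minimal CUDA sketch of the zeroth-order case, where the local polynomial fit reduces to a noise-weighted average of unsaturated samples from all sensors. SensorFrame, hdrReconstructKernel, the window and sensor-count constants, and the simplified shot-plus-read noise model are all assumptions made for this sketch, not the authors' actual code.

#include <cuda_runtime.h>
#include <math.h>

#define NUM_SENSORS 3   // assumed sensor count
#define RADIUS      2   // assumed local window radius (pixels)

// Per-sensor raw frame plus the calibration data the fit needs
// (all fields hypothetical; the paper's noise model is richer).
struct SensorFrame {
    const float *raw;   // raw digital values, row-major
    int width, height;
    float gain;         // exposure scaling: radiance = (raw - black) / gain
    float black;        // black level
    float readVar;      // read-noise variance, assumed signal-independent
    float satLevel;     // samples near this level are clipped and excluded
};

// Order-0 local polynomial fit (a weighted average) at one output pixel:
// each unsaturated raw sample is mapped to relative radiance, then weighted
// by a spatial Gaussian and the inverse of its noise-model variance.
__global__ void hdrReconstructKernel(const SensorFrame *sensors,
                                     float *out, int outW, int outH)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= outW || y >= outH) return;

    float num = 0.0f, den = 0.0f;
    for (int s = 0; s < NUM_SENSORS; ++s) {
        const SensorFrame f = sensors[s];
        for (int dy = -RADIUS; dy <= RADIUS; ++dy) {
            for (int dx = -RADIUS; dx <= RADIUS; ++dx) {
                // Sketch assumes the sensors are already resampled onto the
                // output grid; the paper folds alignment into the same fit.
                int sx = x + dx, sy = y + dy;
                if (sx < 0 || sy < 0 || sx >= f.width || sy >= f.height)
                    continue;
                float v = f.raw[sy * f.width + sx];
                if (v >= 0.95f * f.satLevel) continue;  // skip clipped samples

                // Simplified radiometric noise model: shot noise grows with
                // the signal, read noise is constant.
                float radiance = (v - f.black) / f.gain;
                float var = (fmaxf(v - f.black, 0.0f) + f.readVar)
                            / (f.gain * f.gain);

                float spatial = expf(-(float)(dx * dx + dy * dy) * 0.5f);
                float w = spatial / fmaxf(var, 1e-12f);
                num += w * radiance;
                den += w;
            }
        }
    }
    out[y * outW + x] = (den > 0.0f) ? num / den : 0.0f;
}

The inverse-variance weights are what make the fusion work: near saturation a sample is discarded outright, and elsewhere each sensor contributes in proportion to its reliability, so short exposures dominate in highlights and long exposures in shadows. Per the abstract, the actual method generalizes this idea to higher-order local polynomial fits, arbitrary output resolutions and mappings, and raw (pre-debayer) color samples.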
August 23, 2013 by hgpu