Layered Interpretation of Street View Images
Mitsubishi Electric Research Labs (MERL), Cambridge, Massachusetts, USA
arXiv:1506.04723 [cs.CV] (15 Jun 2015)
@article{liu2015layered,
  title={Layered Interpretation of Street View Images},
  author={Liu, Ming-Yu and Lin, Shuoxin and Ramalingam, Srikumar and Tuzel, Oncel},
  year={2015},
  month={jun},
  eprint={1506.04723},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
We propose a layered street view model that encodes both depth and semantic information in street view images for autonomous driving. Recently, stixels, stix-mantics, and tiered scene labeling methods have been proposed to model street view images. We propose a 4-layer street view model, a more compact representation than the recently proposed stix-mantics model. Our layers encode semantic classes such as ground, pedestrians, vehicles, buildings, and sky, in addition to depth. The only input to our algorithm is a pair of stereo images. We use a deep neural network to extract appearance features for the semantic classes, and a simple and efficient inference algorithm to jointly estimate both the semantic classes and the layered depth values. Our method outperforms competing approaches on the Daimler Urban Scene Segmentation dataset. The algorithm is massively parallelizable, allowing a GPU implementation with a processing speed of about 9 fps.
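To make the compactness of a layered model concrete, below is a minimal Python sketch of one plausible per-column encoding of a 4-layer street view representation. It is an illustration under stated assumptions, not the paper's actual data structure or inference code: the class names (LayeredColumn, label_column), the field names, the label set, and the top-to-bottom layer ordering are all hypothetical.

```python
from dataclasses import dataclass
from typing import List

# Illustrative semantic labels; the paper's exact label set may differ.
GROUND, OBJECT, BUILDING, SKY = range(4)

@dataclass
class LayeredColumn:
    """One image column partitioned into four vertical layers.

    Rows are indexed top-to-bottom. Three boundary rows split the
    column into sky | building | object | ground; each non-sky layer
    also carries a depth estimate (hypothetical encoding).
    """
    sky_end: int        # last row of the sky layer
    building_end: int   # last row of the building layer
    object_end: int     # last row of the object layer; ground fills the rest
    object_class: int   # e.g., pedestrian vs. vehicle (illustrative)
    object_depth: float
    building_depth: float

def label_column(col: LayeredColumn, height: int) -> List[int]:
    """Expand the compact 4-layer encoding into a per-pixel label map."""
    labels = []
    for row in range(height):
        if row <= col.sky_end:
            labels.append(SKY)
        elif row <= col.building_end:
            labels.append(BUILDING)
        elif row <= col.object_end:
            labels.append(col.object_class)
        else:
            labels.append(GROUND)
    return labels
```

Under this kind of encoding, each column is summarized by just three boundary rows plus a few labels and depths, and columns can be processed independently of one another, which is one plausible reason the inference is massively parallelizable, as the abstract notes.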
June 17, 2015 by hgpu