Places205-VGGNet Models for Scene Recognition

Limin Wang, Sheng Guo, Weilin Huang, Yu Qiao
Shenzhen Institutes of Advanced Technology, CAS, China
arXiv:1508.01667 [cs.CV], 7 Aug 2015


@article{wang2015places205vggnet,
   title={Places205-VGGNet Models for Scene Recognition},
   author={Wang, Limin and Guo, Sheng and Huang, Weilin and Qiao, Yu},
   journal={arXiv preprint arXiv:1508.01667},
   year={2015}
}




VGGNets have proven effective for object recognition in still images. However, directly adapting VGGNet models trained on the ImageNet dataset to scene recognition fails to yield good performance. This report describes our implementation of training VGGNets on the large-scale Places205 dataset. Specifically, we train three VGGNet models, namely VGGNet-11, VGGNet-13, and VGGNet-16, using a Multi-GPU extension of the Caffe toolbox with high computational efficiency. We verify the performance of the trained Places205-VGGNet models on three datasets: MIT67, SUN397, and Places205. Our trained models achieve state-of-the-art performance on these datasets and are made publicly available.
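For readers who want to try the released models, VGGNet-style Caffe networks expect a 1x3x224x224 BGR blob with a per-channel mean subtracted. The sketch below shows this standard preprocessing in NumPy; the mean values are the common ImageNet-style BGR means used purely for illustration (the actual Places205 training means ship with the released models), and the function name is hypothetical.

```python
import numpy as np

# Illustrative per-channel BGR means; the real values come with the
# released Places205-VGGNet models.
MEAN_BGR = np.array([104.0, 117.0, 123.0], dtype=np.float32)

def preprocess(image_rgb):
    """Turn an HxWx3 RGB uint8 array into the NCHW float blob a
    VGGNet-style Caffe model expects: 224x224 center crop,
    RGB->BGR channel swap, per-channel mean subtraction."""
    h, w, _ = image_rgb.shape
    if h < 224 or w < 224:
        raise ValueError("image must be at least 224x224")
    # Center crop to 224x224
    top, left = (h - 224) // 2, (w - 224) // 2
    crop = image_rgb[top:top + 224, left:left + 224, :].astype(np.float32)
    # RGB -> BGR, then subtract the channel means
    blob = crop[:, :, ::-1] - MEAN_BGR
    # HWC -> CHW and add a batch dimension: 1x3x224x224
    return blob.transpose(2, 0, 1)[np.newaxis, ...]

# Example with a dummy 256x256 black image
x = preprocess(np.zeros((256, 256, 3), dtype=np.uint8))
print(x.shape)  # (1, 3, 224, 224)
```

The resulting array can be fed directly into a Caffe `data` blob; frameworks other than Caffe typically expect RGB order and different means, so check the model's own documentation.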

* * *

HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors
