High-Performance Neural Networks for Visual Object Classification
IDSIA / USI-SUPSI, Dalle Molle Institute for Artificial Intelligence, Galleria 2, 6928 Manno, Switzerland
arXiv:1102.0183 [cs.AI] (1 Feb 2011)
@article{2011arXiv1102.0183C,
  author        = {{Cire{\c s}an}, D.~C. and {Meier}, U. and {Masci}, J. and {Gambardella}, L.~M. and {Schmidhuber}, J.},
  title         = "{High-Performance Neural Networks for Visual Object Classification}",
  journal       = {ArXiv e-prints},
  archivePrefix = "arXiv",
  eprint        = {1102.0183},
  primaryClass  = "cs.AI",
  keywords      = {Computer Science - Artificial Intelligence, Computer Science - Neural and Evolutionary Computing},
  year          = 2011,
  month         = feb,
  adsurl        = {http://adsabs.harvard.edu/abs/2011arXiv1102.0183C},
  adsnote       = {Provided by the SAO/NASA Astrophysics Data System}
}
We present a fast, fully parameterizable GPU implementation of Convolutional Neural Network variants. Our feature extractors are neither carefully designed nor pre-wired, but rather learned in a supervised way. Our deep hierarchical architectures achieve the best published results on benchmarks for object classification (NORB, CIFAR10) and handwritten digit recognition (MNIST), with error rates of 2.53%, 19.51% and 0.35%, respectively. Deep nets trained by simple back-propagation perform better than shallower ones. Learning is surprisingly rapid: NORB is completely trained within five epochs, and test error rates on MNIST drop to 2.42%, 0.97% and 0.48% after 1, 3 and 17 epochs, respectively.
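For readers unfamiliar with the kind of architecture the abstract describes (alternating convolutional and max-pooling layers followed by a fully connected classifier, all trained by plain back-propagation), the following is a minimal illustrative sketch in NumPy. It is not the authors' GPU implementation; the layer sizes, 5x5 kernels and tanh nonlinearity are assumptions chosen only to make the example concrete and runnable.

```python
# Illustrative sketch (not the paper's code): forward pass through a tiny
# conv -> pool -> conv -> pool -> fully-connected softmax network.
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, kernels, bias):
    """Valid 2-D convolution: x is (H, W, C_in), kernels is (kh, kw, C_in, C_out)."""
    H, W, _ = x.shape
    kh, kw, _, c_out = kernels.shape
    out = np.empty((H - kh + 1, W - kw + 1, c_out))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + kh, j:j + kw, :]                  # (kh, kw, C_in)
            out[i, j, :] = np.tensordot(patch, kernels, axes=3) + bias
    return out

def max_pool(x, size=2):
    """Non-overlapping max-pooling over the spatial dimensions."""
    H, W, C = x.shape
    x = x[:H // size * size, :W // size * size, :]
    x = x.reshape(H // size, size, W // size, size, C)
    return x.max(axis=(1, 3))

def forward(image, params):
    """Compute class probabilities for one input image."""
    h = np.tanh(conv2d(image, params["k1"], params["b1"]))   # 28x28 -> 24x24x8
    h = max_pool(h)                                           # -> 12x12x8
    h = np.tanh(conv2d(h, params["k2"], params["b2"]))        # -> 8x8x16
    h = max_pool(h)                                           # -> 4x4x16
    logits = h.reshape(-1) @ params["w"] + params["bout"]     # fully connected
    e = np.exp(logits - logits.max())
    return e / e.sum()                                        # softmax over 10 classes

# Tiny demo on a random MNIST-sized (28x28 grayscale) input.
image = rng.standard_normal((28, 28, 1))
params = {
    "k1": rng.standard_normal((5, 5, 1, 8)) * 0.1,  "b1": np.zeros(8),
    "k2": rng.standard_normal((5, 5, 8, 16)) * 0.1, "b2": np.zeros(16),
    "w":  rng.standard_normal((4 * 4 * 16, 10)) * 0.1, "bout": np.zeros(10),
}
print(forward(image, params))  # 10 probabilities summing to 1
```

In a full implementation the kernels, biases and fully connected weights would be updated by back-propagating the classification error through these same layers; the paper's contribution is doing exactly that, at scale, on the GPU.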
February 2, 2011 by hgpu