Caffe: Convolutional Architecture for Fast Feature Embedding

Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, Trevor Darrell
UC Berkeley EECS, Berkeley, CA 94702
arXiv:1408.5093 [cs.CV] (20 Jun 2014)

@article{2014arXiv1408.5093J,
   author = {{Jia}, Y. and {Shelhamer}, E. and {Donahue}, J. and {Karayev}, S. and {Long}, J. and {Girshick}, R. and {Guadarrama}, S. and {Darrell}, T.},
   title = "{Caffe: Convolutional Architecture for Fast Feature Embedding}",
   journal = {ArXiv e-prints},
   archivePrefix = "arXiv",
   eprint = {1408.5093},
   primaryClass = "cs.CV",
   keywords = {Computer Science - Computer Vision and Pattern Recognition, Computer Science - Learning, Computer Science - Neural and Evolutionary Computing},
   year = 2014,
   month = jun,
   adsurl = {http://adsabs.harvard.edu/abs/2014arXiv1408.5093J},
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}

Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models. The framework is a BSD-licensed C++ library with Python and MATLAB bindings for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures. Caffe fits industry and internet-scale media needs by CUDA GPU computation, processing over 40 million images a day on a single K40 or Titan GPU (~ 2.5 ms per image). By separating model representation from actual implementation, Caffe allows experimentation and seamless switching among platforms for ease of development and deployment from prototyping machines to cloud environments. Caffe is maintained and developed by the Berkeley Vision and Learning Center (BVLC) with the help of an active community of contributors on GitHub. It powers ongoing research projects, large-scale industrial applications, and startup prototypes in vision, speech, and multimedia.
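To make the separation of model definition from implementation concrete, the following is a minimal inference sketch using the standard pycaffe bindings. The file names deploy.prototxt and weights.caffemodel, and the output blob name "prob", are placeholders rather than anything prescribed by the paper; the CPU/GPU switch shown is the mechanism the abstract refers to when it mentions seamless switching among platforms.

# Minimal pycaffe inference sketch (assumes Caffe's standard Python bindings).
# "deploy.prototxt" and "weights.caffemodel" are placeholder file names for a
# model definition and trained weights of your own.
import numpy as np
import caffe

# Models are declared in plain-text prototxt files, separately from the code
# that executes them; switching execution backends is a one-line change that
# does not touch the model definition.
caffe.set_mode_gpu()      # or caffe.set_mode_cpu() on machines without CUDA
# caffe.set_device(0)     # select a specific GPU if several are available

# Load the network in deployment (inference) mode.
net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

# Feed a batch shaped (N, C, H, W) matching the input blob and run a forward pass.
batch = np.random.rand(*net.blobs['data'].data.shape).astype(np.float32)
net.blobs['data'].data[...] = batch
out = net.forward()

# "prob" is the conventional softmax output blob name in the BVLC reference
# models; adjust it to the top blob name of your own network.
print(out['prob'].argmax(axis=1))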
