Fast Training of Convolutional Networks through FFTs

Michael Mathieu, Mikael Henaff, Yann LeCun
Courant Institute of Mathematical Sciences, New York University
arXiv:1312.5851 [cs.CV], (20 Dec 2013)

@article{2013arXiv1312.5851M,
   author = {{Mathieu}, M. and {Henaff}, M. and {LeCun}, Y.},
   title = "{Fast Training of Convolutional Networks through FFTs}",
   journal = {ArXiv e-prints},
   archivePrefix = "arXiv",
   eprint = {1312.5851},
   primaryClass = "cs.CV",
   keywords = {Computer Science - Computer Vision and Pattern Recognition, Computer Science - Learning, Computer Science - Neural and Evolutionary Computing},
   year = 2013,
   month = dec,
   adsurl = {http://adsabs.harvard.edu/abs/2013arXiv1312.5851M},
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}

Convolutional networks are one of the most widely employed architectures in computer vision and machine learning. In order to leverage their ability to learn complex functions, large amounts of data are required for training. Training a large convolutional network to produce state-of-the-art results can take weeks, even when using modern GPUs. Producing labels using a trained network can also be costly when dealing with web-scale datasets. In this work, we present a simple algorithm which accelerates training and inference by a significant factor, and can yield improvements of over an order of magnitude compared to existing state-of-the-art implementations. This is done by computing convolutions as pointwise products in the Fourier domain while reusing the same transformed feature map many times. The algorithm is implemented on a GPU architecture and addresses a number of related challenges.
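The core idea described in the abstract — computing a convolution as a pointwise product in the Fourier domain — can be sketched in a few lines of NumPy. This is only an illustration of the mathematical equivalence, not the paper's GPU implementation; function names and sizes here are chosen for the example. Zero-padding both inputs to size n + k - 1 turns the FFT's circular convolution into an ordinary linear convolution.

```python
import numpy as np

def fft_conv2d(image, kernel):
    """Full 2-D linear convolution via pointwise product in the Fourier domain."""
    n = image.shape[0] + kernel.shape[0] - 1
    # In a convnet, the transform of a feature map would be computed once
    # and reused across many filters -- the source of the paper's speedup.
    F_img = np.fft.fft2(image, s=(n, n))
    F_ker = np.fft.fft2(kernel, s=(n, n))
    return np.real(np.fft.ifft2(F_img * F_ker))

def direct_conv2d(image, kernel):
    """Reference: full linear convolution computed directly in the spatial domain."""
    n, k = image.shape[0], kernel.shape[0]
    out = np.zeros((n + k - 1, n + k - 1))
    for i in range(k):
        for j in range(k):
            out[i:i + n, j:j + n] += kernel[i, j] * image
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
ker = rng.standard_normal((3, 3))

# Both routes agree up to floating-point error.
print(np.allclose(fft_conv2d(img, ker), direct_conv2d(img, ker)))
```

The direct method costs O(n² k²) per image/filter pair, while the FFT route costs O(n² log n) per transform plus O(n²) per pointwise product, which is why reusing each transformed feature map across many filters pays off.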
