Fast Algorithms for Convolutional Neural Networks
arXiv:1509.09308 [cs.NE] (30 Sep 2015)
@article{lavin2015fast,
title={Fast Algorithms for Convolutional Neural Networks},
author={Lavin, Andrew},
year={2015},
month={sep},
eprint={1509.09308},
archivePrefix={arXiv},
primaryClass={cs.NE}
}
We derive a new class of fast algorithms for convolutional neural networks using Winograd’s minimal filtering algorithms. Specifically, we derive algorithms for network layers with 3×3 kernels, which are the preferred kernel size for image recognition tasks. The best of our algorithms reduces arithmetic complexity by up to 4X compared with direct convolution, while using small block sizes with limited transform overhead and high computational intensity. By comparison, FFT-based convolution requires larger block sizes and significantly greater transform overhead to achieve an equal complexity reduction. We measure the accuracy of our algorithms to be sufficient for deep learning and inference with fp32 or fp16 data. We also demonstrate the practical application of our approach with a simple CPU implementation of our slowest algorithm using the Intel Math Kernel Library, and report VGG network inference results that are 2.6X as fast as Caffe, with an effective utilization of 109%. We believe these are the highest-utilization convnet inference results to date, and that they can be improved significantly with more implementation effort. We also believe the new algorithms lend themselves equally well to GPU and FPGA implementations, for both training and inference.
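To illustrate the idea behind the abstract, here is a minimal sketch (not the paper's code) of Winograd's 1-D minimal filtering algorithm F(2,3), which computes two outputs of a 3-tap filter with 4 multiplications instead of the 6 needed by direct convolution; the function names are illustrative only:

```python
def winograd_f23(d, g):
    """F(2,3): 4 input values d, 3 filter taps g -> 2 outputs, 4 multiplies."""
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    # Filter transform (amortized: precompute once per filter).
    G0 = g0
    G1 = (g0 + g1 + g2) / 2
    G2 = (g0 - g1 + g2) / 2
    G3 = g2
    # Data transform and the 4 element-wise multiplications.
    m0 = (d0 - d2) * G0
    m1 = (d1 + d2) * G1
    m2 = (d2 - d1) * G2
    m3 = (d1 - d3) * G3
    # Inverse transform produces the 2 outputs.
    return [m0 + m1 + m2, m1 - m2 - m3]

def direct_conv(d, g):
    """Direct 1-D correlation giving the same 2 outputs (6 multiplies)."""
    return [sum(d[i + k] * g[k] for k in range(3)) for i in range(2)]
```

Nesting F(2,3) with itself yields the 2-D algorithm F(2×2,3×3), which replaces 36 multiplications per 2×2 output tile with 16 (a 2.25X reduction); larger tiles such as F(4×4,3×3) approach the up-to-4X figure cited in the abstract.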
October 3, 2015 by hgpu