PerforatedCNNs: Acceleration through Elimination of Redundant Convolutions
Lomonosov Moscow State University, Moscow, Russia
arXiv:1504.08362 [cs.CV] (30 Apr 2015)
@article{figurnov2015perforatedcnns,
title={PerforatedCNNs: Acceleration through Elimination of Redundant Convolutions},
author={Figurnov, Michael and Vetrov, Dmitry and Kohli, Pushmeet},
year={2015},
month={apr},
eprint={1504.08362},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
This paper proposes a novel approach to reducing the computational cost of evaluating convolutional neural networks, a factor that has hindered their deployment on low-power devices such as mobile phones. Our method is inspired by the loop perforation technique from source code optimization and accelerates the evaluation of bottleneck convolutional layers by exploiting the spatial redundancy of the network. A key element of this approach is the choice of the perforation mask. We propose and analyze different strategies for constructing perforation masks that achieve a desired evaluation time with limited loss in accuracy. We demonstrate our approach on modern CNN architectures from the literature and show that it reduces evaluation time by 50% with a minimal drop in accuracy.
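To make the perforation idea concrete, the following is a minimal sketch (not the paper's implementation): a convolution is evaluated only at spatial positions selected by a binary perforation mask, and the skipped positions are filled in from the nearest evaluated neighbour, a simple stand-in for the interpolation step. The function names and the nearest-neighbour fill are illustrative assumptions, not from the paper.

```python
import numpy as np

def conv2d_full(x, k):
    """Reference: naive 'valid' 2D cross-correlation over all positions."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def conv2d_perforated(x, k, mask):
    """Evaluate the convolution only where mask is True; fill the
    perforated (skipped) positions with the nearest evaluated value.
    This is an illustrative sketch of the perforation idea, not the
    interpolation scheme used in the paper."""
    kh, kw = k.shape
    out = np.zeros(mask.shape)
    pts = np.argwhere(mask)            # positions actually computed
    for i, j in pts:
        out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    # nearest-neighbour fill for the skipped positions
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if not mask[i, j]:
                d = np.abs(pts - np.array([i, j])).sum(axis=1)
                ni, nj = pts[np.argmin(d)]
                out[i, j] = out[ni, nj]
    return out
```

With a stride-2 grid mask (`mask[::2, ::2] = True`), only about a quarter of the output positions are computed exactly; the mask construction strategies studied in the paper trade such savings against the interpolation error.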
May 5, 2015 by hgpu