
PerforatedCNNs: Acceleration through Elimination of Redundant Convolutions

Michael Figurnov, Dmitry Vetrov, Pushmeet Kohli
Lomonosov Moscow State University, Moscow, Russia
arXiv:1504.08362 [cs.CV] (30 Apr 2015)

@article{figurnov2015perforatedcnns,
   title={PerforatedCNNs: Acceleration through Elimination of Redundant Convolutions},
   author={Figurnov, Michael and Vetrov, Dmitry and Kohli, Pushmeet},
   year={2015},
   month={apr},
   eprint={1504.08362},
   archivePrefix={arXiv},
   primaryClass={cs.CV}
}


This paper proposes a novel approach to reducing the computational cost of evaluating convolutional neural networks (CNNs), a factor that has hindered their deployment on low-power devices such as mobile phones. The method is inspired by the loop perforation technique from source code optimization and accelerates the evaluation of bottleneck convolutional layers by exploiting the spatial redundancy of the network. A key element of the approach is the choice of the perforation mask. The authors propose and analyze different strategies for constructing the perforation mask that achieve a desired evaluation time with limited loss in accuracy. They demonstrate the approach on modern CNN architectures from the literature and show that it reduces evaluation time by 50% with a minimal drop in accuracy.
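The core idea, evaluating the convolution only at the spatial positions selected by the perforation mask and filling the skipped positions by nearest-neighbour interpolation, can be sketched in a few lines. The code below is a minimal single-channel NumPy illustration, not the authors' implementation: the function name perforated_conv2d and the uniform random mask are assumptions made for the example, and the mask is assumed to contain at least one evaluated position.

import numpy as np

def perforated_conv2d(x, w, mask):
    """Illustrative perforated convolution (single channel, 'valid' padding).

    x    : (H, W) input
    w    : (kH, kW) filter
    mask : (H-kH+1, W-kW+1) boolean perforation mask; True = evaluate,
           False = skip and fill by nearest-neighbour interpolation.
    """
    kH, kW = w.shape
    oH, oW = x.shape[0] - kH + 1, x.shape[1] - kW + 1
    out = np.zeros((oH, oW), dtype=x.dtype)

    # Evaluate the convolution only at the non-perforated positions.
    ys, xs = np.nonzero(mask)
    for i, j in zip(ys, xs):
        out[i, j] = np.sum(x[i:i + kH, j:j + kW] * w)

    # Fill each skipped position with the value of the nearest evaluated one
    # (Manhattan distance is used here for simplicity).
    evaluated = np.stack([ys, xs], axis=1)
    for i in range(oH):
        for j in range(oW):
            if not mask[i, j]:
                d = np.abs(evaluated - np.array([i, j])).sum(axis=1)
                ni, nj = evaluated[np.argmin(d)]
                out[i, j] = out[ni, nj]
    return out

# Example: evaluate roughly 50% of output positions (uniform random mask).
rng = np.random.default_rng(0)
x = rng.standard_normal((32, 32)).astype(np.float32)
w = rng.standard_normal((3, 3)).astype(np.float32)
mask = rng.random((30, 30)) < 0.5
y = perforated_conv2d(x, w, mask)

Skipping half of the output positions roughly halves the multiply-accumulate work of the layer; how much accuracy this costs depends on the mask-construction strategy, which is the question the paper studies.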
