Parallel Multi Channel Convolution using General Matrix Multiplication

Aravind Vasudevan, Andrew Anderson, David Gregg
School of Computer Science and Statistics, Trinity College Dublin
arXiv:1704.04428 [cs.CV], (6 Apr 2017)

@article{vasudevan2017parallel,
   title={Parallel Multi Channel Convolution using General Matrix Multiplication},
   author={Vasudevan, Aravind and Anderson, Andrew and Gregg, David},
   year={2017},
   month={apr},
   eprint={1704.04428},
   archivePrefix={arXiv},
   primaryClass={cs.CV}
}

Convolutional neural networks (CNNs) have emerged as one of the most successful machine learning technologies for image and video processing. The most computationally intensive parts of CNNs are the convolutional layers, which convolve multi-channel images with multiple kernels. A common approach to implementing convolutional layers is to expand the image into a column matrix (im2col) and perform Multiple Channel Multiple Kernel (MCMK) convolution using an existing parallel General Matrix Multiplication (GEMM) library. This im2col conversion greatly increases the memory footprint of the input matrix and reduces data locality. In this paper we propose a new approach to MCMK convolution that is based on General Matrix Multiplication (GEMM), but not on im2col. Our algorithm eliminates the need for data replication on the input. By splitting a single call to GEMM into several smaller calls, we can eliminate data size increases on either the input or output of the convolution layer. We have implemented several variants of our algorithm on CPU and GPU processors. On CPU, our algorithm uses much less memory than im2col and in most cases is also faster.
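To make the contrast concrete, here is a minimal NumPy sketch (not the paper's implementation) of three equivalent MCMK convolutions: a direct reference loop, the im2col-plus-GEMM approach the abstract criticizes, and an im2col-free variant in the spirit of the paper that replaces the one large GEMM with K*K small GEMMs over the unexpanded image, accumulating shifted partial results. All shapes, function names, and the same-padding/stride-1 setup are illustrative assumptions.

```python
import numpy as np

def conv_mcmk_direct(image, kernels):
    """Direct multi-channel multi-kernel (MCMK) convolution, used as a
    reference. image: (C, H, W); kernels: (M, C, K, K). Same padding,
    stride 1, correlation convention."""
    M, C, K, _ = kernels.shape
    _, H, W = image.shape
    pad = K // 2
    padded = np.pad(image, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros((M, H, W))
    for m in range(M):
        for y in range(H):
            for x in range(W):
                out[m, y, x] = np.sum(padded[:, y:y + K, x:x + K] * kernels[m])
    return out

def conv_im2col(image, kernels):
    """im2col approach: replicate image patches into a (C*K*K, H*W) column
    matrix (K*K-fold data expansion), then one big GEMM with the
    (M, C*K*K) kernel matrix."""
    M, C, K, _ = kernels.shape
    _, H, W = image.shape
    pad = K // 2
    padded = np.pad(image, ((0, 0), (pad, pad), (pad, pad)))
    cols = np.empty((C * K * K, H * W))
    idx = 0
    for c in range(C):
        for ky in range(K):
            for kx in range(K):
                cols[idx] = padded[c, ky:ky + H, kx:kx + W].ravel()
                idx += 1
    return (kernels.reshape(M, C * K * K) @ cols).reshape(M, H, W)

def conv_sum_of_gemms(image, kernels):
    """im2col-free variant: one small GEMM per kernel element, i.e. K*K
    products of an (M, C) matrix with the (C, H*W) image, no input
    replication. Each partial result is a 1x1 convolution that is
    accumulated into the output at the appropriate spatial offset."""
    M, C, K, _ = kernels.shape
    _, H, W = image.shape
    pad = K // 2
    flat = image.reshape(C, H * W)
    out = np.zeros((M, H, W))
    for ky in range(K):
        for kx in range(K):
            # (M, C) @ (C, H*W): a small GEMM on the un-expanded image.
            partial = (kernels[:, :, ky, kx] @ flat).reshape(M, H, W)
            # Shift by (pad - ky, pad - kx) and accumulate the overlap.
            dy, dx = pad - ky, pad - kx
            ys0, ys1 = max(0, dy), min(H, H + dy)
            xs0, xs1 = max(0, dx), min(W, W + dx)
            out[:, ys0:ys1, xs0:xs1] += \
                partial[:, ys0 - dy:ys1 - dy, xs0 - dx:xs1 - dx]
    return out
```

The memory trade-off the abstract describes is visible here: `conv_im2col` materializes a column matrix K*K times larger than the input, while `conv_sum_of_gemms` touches only the original image plus an output-sized accumulator, at the cost of issuing K*K smaller GEMM calls.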

* * *

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors