On Vectorization of Deep Convolutional Neural Networks for Vision Tasks
Lenovo Research & Technology
arXiv:1501.07338 [cs.CV], (29 Jan 2015)
@article{ren2015vectorization,
  title={On Vectorization of Deep Convolutional Neural Networks for Vision Tasks},
  author={Ren, Jimmy SJ. and Xu, Li},
  journal={arXiv preprint arXiv:1501.07338},
  year={2015},
  month={jan},
  archivePrefix={arXiv},
  eprint={1501.07338},
  primaryClass={cs.CV}
}
We have recently witnessed many ground-breaking results in machine learning and computer vision generated by deep convolutional neural networks (CNN). While the success stems mainly from large volumes of training data and deep network architectures, vector processing hardware (e.g. GPUs) undisputedly plays a vital role in modern CNN implementations by supporting the massive computation involved. Although much attention has been paid in the existing literature to understanding the algorithmic side of deep CNNs, little research has been dedicated to vectorization for scaling up CNNs. In this paper, we study the vectorization process of key building blocks in deep CNNs, in order to better understand and facilitate parallel implementation. Key steps in training and testing deep CNNs are abstracted as matrix and vector operators, upon which parallelism can be easily achieved. We developed and compared six implementations with various degrees of vectorization, which illustrate the impact of vectorization on the speed of model training and testing. In addition, a unified CNN framework for both high-level and low-level vision tasks is provided, along with a vectorized Matlab implementation with state-of-the-art speed performance.
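To make the idea of casting CNN building blocks as matrix operations concrete, here is a minimal NumPy sketch of one widely used approach: unrolling image patches into columns ("im2col") so that a convolutional layer becomes a single dense matrix multiplication. This is an illustrative assumption about the general technique, not the paper's Matlab implementation; the names `im2col`, `conv_forward`, `x`, and `filters` are hypothetical.

```python
# Illustrative sketch (not the authors' code): vectorizing a convolutional
# layer by unrolling local patches into a matrix, so one GEMM replaces the
# nested spatial loops and maps well onto vector hardware such as GPUs.
import numpy as np

def im2col(x, k):
    """Unroll all k x k patches of a single-channel image x into columns."""
    H, W = x.shape
    out_h, out_w = H - k + 1, W - k + 1
    cols = np.empty((k * k, out_h * out_w))
    idx = 0
    for i in range(out_h):
        for j in range(out_w):
            cols[:, idx] = x[i:i + k, j:j + k].ravel()
            idx += 1
    return cols

def conv_forward(x, filters):
    """Valid-mode correlation (no kernel flip), as in typical CNN layers.
    filters has shape (num_filters, k, k)."""
    n_f, k, _ = filters.shape
    cols = im2col(x, k)              # (k*k, out_h*out_w)
    W = filters.reshape(n_f, k * k)  # (n_f, k*k)
    out = W @ cols                   # single matrix multiply for all filters
    out_h = x.shape[0] - k + 1
    return out.reshape(n_f, out_h, -1)

x = np.random.rand(8, 8)
filters = np.random.rand(4, 3, 3)
print(conv_forward(x, filters).shape)  # (4, 6, 6)
```

Once the layer is expressed this way, the heavy lifting is a matrix product, which is exactly the kind of operator that GPUs and other vector processors execute efficiently.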
January 30, 2015 by hgpu