A dynamically configurable coprocessor for convolutional neural networks

Srimat Chakradhar, Murugan Sankaradas, Venkata Jakkula, Srihari Cadambi
NEC Laboratories America, Inc., Princeton, NJ, USA
In ISCA ’10: Proceedings of the 37th annual international symposium on Computer architecture (2010), pp. 247-257

@conference{chakradhar2010dynamically,
   title={A dynamically configurable coprocessor for convolutional neural networks},
   author={Chakradhar, S. and Sankaradas, M. and Jakkula, V. and Cadambi, S.},
   booktitle={Proceedings of the 37th annual international symposium on Computer architecture},
   pages={247--257},
   year={2010},
   organization={ACM}
}

Convolutional neural network (CNN) applications range from recognition and reasoning (such as handwriting recognition, facial expression recognition and video surveillance) to intelligent text applications such as semantic text analysis and natural language processing. Two key observations drive the design of a new architecture for CNN. First, CNN workloads exhibit a widely varying mix of three types of parallelism: parallelism within a convolution operation, intra-output parallelism where multiple input sources (features) are combined to create a single output, and inter-output parallelism where multiple, independent outputs (features) are computed simultaneously. Workloads differ significantly across different CNN applications, and across different layers of a CNN. Second, the number of processing elements in an architecture continues to scale (as per Moore's law) much faster than the off-chip memory bandwidth (or pin count) of chips. Based on these two observations, we show that for a given number of processing elements and off-chip memory bandwidth, a new CNN hardware architecture that dynamically configures the hardware on-the-fly to match the specific mix of parallelism in a given workload gives the best throughput performance. Our CNN compiler automatically translates a high-abstraction network specification into a parallel microprogram (a sequence of low-level VLIW instructions) that is mapped, scheduled and executed by the coprocessor. Compared to a 2.3 GHz quad-core, dual-socket Intel Xeon, a 1.35 GHz C870 GPU, and a 200 MHz FPGA implementation, our 120 MHz dynamically configurable architecture is 4x to 8x faster. This is the first CNN architecture to achieve real-time video stream processing (25 to 30 frames per second) on a wide range of object detection and recognition tasks.
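
The three kinds of parallelism the abstract names can be read directly off the loop nest of a convolution layer. The following C sketch is purely illustrative and is not taken from the paper: all names, array shapes and sizes are hypothetical placeholders, and it shows only where each form of parallelism lives in the computation, not the coprocessor's actual design or instruction set.

```c
/*
 * Illustrative sketch (not from the paper): a CNN convolution layer written
 * as a plain loop nest, annotated with the three kinds of parallelism the
 * abstract describes. All identifiers and sizes are hypothetical.
 */
#include <stddef.h>

#define NUM_OUT   8   /* output feature maps (hypothetical)            */
#define NUM_IN    4   /* input feature maps (hypothetical)             */
#define OUT_ROWS 64   /* output height (hypothetical)                  */
#define OUT_COLS 64   /* output width (hypothetical)                   */
#define K         7   /* convolution kernel size (hypothetical)        */

void conv_layer(const float in[NUM_IN][OUT_ROWS + K - 1][OUT_COLS + K - 1],
                const float kernel[NUM_OUT][NUM_IN][K][K],
                float out[NUM_OUT][OUT_ROWS][OUT_COLS])
{
    /* Inter-output parallelism: each output feature map 'o' is independent
       of the others, so different maps can be computed simultaneously.    */
    for (size_t o = 0; o < NUM_OUT; o++)
        for (size_t r = 0; r < OUT_ROWS; r++)
            for (size_t c = 0; c < OUT_COLS; c++) {
                float acc = 0.0f;
                /* Intra-output parallelism: contributions from all input
                   feature maps 'i' are combined into one output value.   */
                for (size_t i = 0; i < NUM_IN; i++)
                    /* Intra-convolution parallelism: the K x K multiply-
                       accumulates within a single convolution window.    */
                    for (size_t kr = 0; kr < K; kr++)
                        for (size_t kc = 0; kc < K; kc++)
                            acc += kernel[o][i][kr][kc] * in[i][r + kr][c + kc];
                out[o][r][c] = acc;
            }
}
```

Which of these loop levels dominates varies across CNN applications and across layers of the same network (for example, early layers typically have few feature maps but large kernels and images, while later layers have many small feature maps), which is what motivates configuring the hardware's parallelism mix dynamically rather than fixing it at design time.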
