Automatic Compiler Based FPGA Accelerator for CNN Training

Shreyas Kolala Venkataramanaiah, Yufei Ma, Shihui Yin, Eriko Nurvitadhi, Aravind Dasu, Yu Cao, Jae-sun Seo
School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ, USA
arXiv:1908.06724 [cs.LG], (15 Aug 2019)

@misc{venkataramanaiah2019automatic,

   title={Automatic Compiler Based FPGA Accelerator for CNN Training},

   author={Shreyas Kolala Venkataramanaiah and Yufei Ma and Shihui Yin and Eriko Nurvitadhi and Aravind Dasu and Yu Cao and Jae-sun Seo},

   year={2019},

   eprint={1908.06724},

   archivePrefix={arXiv},

   primaryClass={cs.LG}

}

Training of convolutional neural networks (CNNs) on embedded platforms to support on-device learning is gaining vital importance in recent days. Designing flexible training hardware is much more challenging than inference hardware, due to design complexity and large computation/memory requirements. In this work, we present an automatic compiler-based FPGA accelerator with 16-bit fixed-point precision for complete CNN training, including Forward Pass (FP), Backward Pass (BP) and Weight Update (WU). We implemented an optimized RTL library to perform training-specific tasks and developed an RTL compiler to automatically generate FPGA-synthesizable RTL based on user-defined constraints. We present a new cyclic weight storage/access scheme for on-chip BRAM and off-chip DRAM to efficiently implement non-transpose and transpose operations during the FP and BP phases, respectively. Representative CNNs for the CIFAR-10 dataset are implemented and trained on an Intel Stratix 10-GX FPGA using the proposed hardware architecture, demonstrating up to 479 GOPS performance.
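The cyclic weight storage/access scheme mentioned in the abstract can be illustrated with a skewed-storage model: if an N×N weight tile is spread across N memory banks with a cyclic offset, then both a row read (non-transposed access in the FP phase) and a column read (transposed access in the BP phase) touch each bank exactly once, avoiding bank conflicts. The sketch below is a conceptual software model only, not the authors' RTL; the tile size, bank layout, and function names are illustrative assumptions.

```python
# Conceptual model (not the paper's RTL) of cyclic/skewed weight storage:
# element W[r][c] is placed in bank (r + c) % N at address r, so any single
# row or any single column maps to N distinct banks.
N = 4  # hypothetical tile size

def store(W):
    """Distribute an N x N tile across N banks with a cyclic skew."""
    banks = [[None] * N for _ in range(N)]
    for r in range(N):
        for c in range(N):
            banks[(r + c) % N][r] = W[r][c]
    return banks

def read_row(banks, r):
    # FP (non-transpose) access: element c of row r sits in bank (r+c) % N,
    # so all N banks are read once in parallel.
    return [banks[(r + c) % N][r] for c in range(N)]

def read_col(banks, c):
    # BP (transpose) access: element r of column c also sits in bank
    # (r+c) % N, again one element per bank -- no conflicts.
    return [banks[(r + c) % N][r] for r in range(N)]

W = [[r * N + c for c in range(N)] for r in range(N)]
banks = store(W)
assert read_row(banks, 2) == W[2]
assert read_col(banks, 1) == [W[r][1] for r in range(N)]
```

The same tile thus serves both training phases without physically storing a transposed copy, which is the motivation the abstract gives for the scheme.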

* * *

HGPU group © 2010-2019 hgpu.org

All rights belong to the respective authors