LeFlow: Enabling Flexible FPGA High-Level Synthesis of Tensorflow Deep Neural Networks

Daniel H. Noronha, Bahar Salehpour, Steven J.E. Wilton
Programmable Solutions Group, Intel
arXiv:1807.05317 [cs.LG], (14 Jul 2018)

@article{noronha2018leflow,
   title={LeFlow: Enabling Flexible FPGA High-Level Synthesis of Tensorflow Deep Neural Networks},
   author={Noronha, Daniel H. and Salehpour, Bahar and Wilton, Steven J.E.},
   year={2018},
   month={jul},
   eprint={1807.05317},
   archivePrefix={arXiv},
   primaryClass={cs.LG}
}

Recent work has shown that Field-Programmable Gate Arrays (FPGAs) play an important role in the acceleration of Machine Learning applications. The initial specification of a machine learning application is often written in a high-level Python-oriented framework such as Tensorflow, then manually translated to either C or RTL for synthesis using vendor tools. This manual translation step is time-consuming and requires expertise that limits the applicability of FPGAs in this important domain. In this paper, we present an open-source tool-flow that maps numerical computation models written in Tensorflow to synthesizable hardware. Unlike other tools, which are often constrained by a small number of inflexible templates, our flow uses Google’s XLA compiler, which emits LLVM code directly from a Tensorflow specification. This LLVM code can then be used with a high-level synthesis tool to automatically generate hardware. We show that our flow allows users to generate Deep Neural Networks with very few lines of Python code.
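The abstract's claim that a network needs only a few lines of Python can be illustrated with a minimal sketch. This is not LeFlow's actual entry point (which the abstract does not show); it only assumes the standard TensorFlow path for requesting XLA compilation, `tf.function(jit_compile=True)`. The layer shapes and weight values below are made up for illustration.

```python
# Hedged sketch: a tiny fully connected layer specified in a few lines of
# Python, with XLA compilation requested via jit_compile=True. Flows like
# LeFlow start from XLA's output rather than hand-written C or RTL.
import numpy as np
import tensorflow as tf


@tf.function(jit_compile=True)  # ask XLA to compile this computation
def dense_relu(x, w, b):
    # One dense layer with ReLU activation.
    return tf.nn.relu(tf.matmul(x, w) + b)


# Illustrative inputs: 1x4 ones, 4x2 weights of 0.5, zero bias.
x = np.ones((1, 4), dtype=np.float32)
w = np.full((4, 2), 0.5, dtype=np.float32)
b = np.zeros((2,), dtype=np.float32)
y = dense_relu(x, w, b)  # each output element is 4 * 1.0 * 0.5 = 2.0
```

Separately, setting the environment variable `XLA_FLAGS=--xla_dump_to=<dir>` before running makes XLA dump its intermediate files for the compiled computation, which includes the LLVM IR that the flow described in the abstract hands to a high-level synthesis tool.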

