LeFlow: Enabling Flexible FPGA High-Level Synthesis of Tensorflow Deep Neural Networks
Programmable Solutions Group, Intel
arXiv:1807.05317 [cs.LG], 14 Jul 2018
@article{noronha2018leflow,
  title={LeFlow: Enabling Flexible FPGA High-Level Synthesis of Tensorflow Deep Neural Networks},
  author={Noronha, Daniel H. and Salehpour, Bahar and Wilton, Steven J. E.},
  year={2018},
  month={jul},
  eprint={1807.05317},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
Recent work has shown that Field-Programmable Gate Arrays (FPGAs) play an important role in the acceleration of Machine Learning applications. The initial specification of a machine learning application is often done in a high-level Python-oriented framework such as Tensorflow, followed by a manual translation to either C or RTL for synthesis using vendor tools. This manual translation step is time-consuming and requires expertise that limits the applicability of FPGAs in this important domain. In this paper, we present an open-source tool-flow that maps numerical computation models written in Tensorflow to synthesizable hardware. Unlike other tools, which are often constrained by a small number of inflexible templates, our flow uses Google’s XLA compiler, which emits LLVM code directly from a Tensorflow specification. This LLVM code can then be used with a high-level synthesis tool to automatically generate hardware. We show that our flow allows users to generate Deep Neural Networks with very few lines of Python code.
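For illustration only, and not LeFlow's actual interface: the sketch below shows the kind of short TensorFlow (1.x-era) specification such a flow consumes, a single dense layer with XLA JIT compilation enabled so the graph is lowered through XLA to LLVM IR, which a high-level synthesis tool could then map to hardware. The layer sizes and tensor names are illustrative assumptions.

import numpy as np
import tensorflow as tf

# Enable XLA JIT compilation so the graph is lowered through XLA,
# which is the path that produces LLVM IR.
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1

# A tiny dense layer with a ReLU activation (shapes are arbitrary).
x = tf.placeholder(tf.float32, shape=[1, 8], name="x")
w = tf.Variable(np.random.rand(8, 4).astype(np.float32), name="w")
y = tf.nn.relu(tf.matmul(x, w), name="y")

with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: np.ones((1, 8), dtype=np.float32)}))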
July 21, 2018 by hgpu