ScaleHLS: Scalable High-Level Synthesis through MLIR

Hanchen Ye, Cong Hao, Jianyi Cheng, Hyunmin Jeong, Jack Huang, Stephen Neuendorffer, Deming Chen
University of Illinois at Urbana-Champaign
arXiv:2107.11673 [cs.PL] (3 Aug 2021)

@misc{ye2021scalehls,
   title={ScaleHLS: Scalable High-Level Synthesis through MLIR},
   author={Hanchen Ye and Cong Hao and Jianyi Cheng and Hyunmin Jeong and Jack Huang and Stephen Neuendorffer and Deming Chen},
   year={2021},
   eprint={2107.11673},
   archivePrefix={arXiv},
   primaryClass={cs.PL}
}

High-Level Synthesis (HLS) has been widely adopted because it significantly improves hardware design productivity and enables efficient design space exploration (DSE). HLS tools are applied to many different kinds of design problems, which are often best solved at different levels of abstraction. While existing HLS tools are built on compiler infrastructures that are largely single-level (e.g., LLVM), we propose ScaleHLS, a next-generation HLS compilation flow built, for the first time, on top of MLIR, a multi-level compiler infrastructure. By using an intermediate representation (IR) that can be tuned to particular algorithms at different representation levels, this new HLS tool is more scalable and customizable for applications with intrinsic structural or functional hierarchies. ScaleHLS represents and optimizes HLS designs at multiple levels of abstraction and provides an HLS-dedicated transform and analysis library that solves each optimization problem at the appropriate representation level. On top of this library, we build an automated DSE engine to explore the multi-dimensional design space efficiently. In addition, we develop an HLS C front-end and a C/C++ emission back-end that translate HLS designs into and out of MLIR, enabling an end-to-end ScaleHLS flow. Experimental results show that, compared to baseline designs optimized only by Xilinx Vivado HLS, ScaleHLS delivers substantial quality-of-results improvements: up to 768.1x better performance on computation-kernel-level programs and up to 3825.0x better on neural network models.
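For context, the end-to-end flow consumes plain HLS C. The sketch below is a hypothetical computation-kernel-level input (the gemm name and the size N are illustrative, not taken from the paper); it is deliberately pragma-free, since the premise of ScaleHLS is that loop and directive optimizations (for example, standard HLS directives such as pipelining, unrolling, and array partitioning) are applied automatically on the MLIR representation by the transform library and DSE engine rather than written by hand.

/* Hypothetical computation-kernel-level input to the ScaleHLS flow:
 * a plain, pragma-free matrix multiplication in HLS C. The front-end
 * would lower this into MLIR, the transform/analysis library and DSE
 * engine would optimize it there, and the emission back-end would
 * produce optimized HLS C++ for a downstream tool such as Xilinx
 * Vivado HLS. Names and sizes here are illustrative. */
#define N 32

void gemm(float A[N][N], float B[N][N], float C[N][N]) {
  for (int i = 0; i < N; ++i) {
    for (int j = 0; j < N; ++j) {
      float acc = 0.0f;
      for (int k = 0; k < N; ++k)
        acc += A[i][k] * B[k][j];
      C[i][j] = acc;
    }
  }
}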
