LIFT: LLM-Based Pragma Insertion for HLS via GNN Supervised Fine-Tuning

Neha Prakriya, Zijian Ding, Yizhou Sun, Jason Cong
Computer Science Department, University of California – Los Angeles, USA
arXiv:2504.21187 [cs.LG] (29 Apr 2025)

@misc{prakriya2025liftllmbasedpragmainsertion,
   title={LIFT: LLM-Based Pragma Insertion for HLS via GNN Supervised Fine-Tuning},
   author={Neha Prakriya and Zijian Ding and Yizhou Sun and Jason Cong},
   year={2025},
   eprint={2504.21187},
   archivePrefix={arXiv},
   primaryClass={cs.LG},
   url={https://arxiv.org/abs/2504.21187}
}

FPGAs are increasingly adopted in datacenter environments for their reconfigurability and energy efficiency. High-Level Synthesis (HLS) tools have eased FPGA programming by raising the abstraction level from RTL to untimed C/C++, yet attaining high performance still demands expert knowledge and iterative manual insertion of optimization pragmas to modify the microarchitecture. To address this challenge, we propose LIFT, a large language model (LLM)-based coding assistant for HLS that automatically generates performance-critical pragmas given a C/C++ design. We fine-tune the LLM by tightly integrating and supervising the training process with a graph neural network (GNN), combining the sequential modeling capabilities of LLMs with the structural and semantic understanding of GNNs necessary for reasoning over code and its control/data dependencies. On average, LIFT produces designs that improve performance by 3.52x and 2.16x over the prior state-of-the-art AutoDSE and HARP, respectively, and by 66x over GPT-4o.
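For readers unfamiliar with HLS pragmas, the sketch below illustrates the kind of annotation LIFT is trained to produce: a plain C/C++ loop kernel before and after inserting performance-critical pragmas. The kernel and the specific Vitis-HLS-style pragma choices (pipelining, unrolling, array partitioning) are illustrative assumptions, not an example taken from the paper.

    // Hypothetical illustration of HLS pragma insertion; not from the paper.
    #define N 1024

    // Before: untimed C++ as accepted by HLS tools.
    void vadd_plain(const int a[N], const int b[N], int out[N]) {
        for (int i = 0; i < N; ++i) {
            out[i] = a[i] + b[i];
        }
    }

    // After: the same kernel with inserted optimization pragmas that
    // reshape the generated microarchitecture. A regular C++ compiler
    // ignores the pragmas, so the file still builds for functional testing.
    void vadd_optimized(const int a[N], const int b[N], int out[N]) {
    #pragma HLS ARRAY_PARTITION variable=a cyclic factor=4
    #pragma HLS ARRAY_PARTITION variable=b cyclic factor=4
    #pragma HLS ARRAY_PARTITION variable=out cyclic factor=4
        for (int i = 0; i < N; ++i) {
    #pragma HLS PIPELINE II=1
    #pragma HLS UNROLL factor=4
            out[i] = a[i] + b[i];
        }
    }

Choosing where to place such pragmas and with what factors is the hard part: the design space grows rapidly with nested loops and arrays, which is why LIFT pairs the LLM's sequential view of the source with a GNN's view of its control/data dependencies.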
