LIFT: LLM-Based Pragma Insertion for HLS via GNN Supervised Fine-Tuning

Prakriya, Neha, Ding, Zijian, Sun, Yizhou, Cong, Jason

arXiv.org Artificial Intelligence 

FPGAs are increasingly adopted in datacenter environments for their reconfigurability and energy efficiency. High-Level Synthesis (HLS) tools have eased FPGA programming by raising the abstraction level from RTL to untimed C/C++, yet attaining high performance still demands expert knowledge and iterative manual insertion of optimization pragmas to modify the microarchitecture. To address this challenge, we propose LIFT, a large language model (LLM)-based coding assistant for HLS that automatically generates performance-critical pragmas given a C/C++ design. On average, LIFT produces designs that improve performance by 3.52× and 2.16× over the prior state-of-the-art AutoDSE and HARP respectively, and 66× over GPT-4o.

Datacenter applications require high-performance, low-power, scalable, and reconfigurable hardware. With the end of Dennard scaling [1], these requirements are becoming increasingly critical to address. FPGAs emerge as a powerful solution and in recent years have been adopted by major cloud providers such as AWS, Microsoft, and Alibaba in their servers. Despite their potential, FPGAs remain challenging to program and deploy efficiently. High-Level Synthesis (HLS) tools such as Vitis HLS [2], Merlin [3], and Intel HLS [4] aim to bridge this gap by raising the abstraction level from low-level RTL to C/C++.